Metrics are especially relevant in today's era of followers, in which many of us measure the value of our life stories and content by the number of likes we receive from others. Yet the story itself may at times be more "telling", or even more entertaining, than its ending, and still not generate much attention. In fact, in the workplace, performance results are most often communicated best through numbers and through the story's ending, rather than through the story or the process.

Numbers reflect only the final act or final episode of the story of a performance. Yet, however accurate or widely accepted they may be, numbers are necessary: they provide some "objective", fixed value by which to describe our subjective efforts, the results and impact of our work, and our progress over time. When numbers are expressed as performance metrics, they help us justify the story of our performance.

"Pictures may tell a story, but numbers tell how the story ends."

As a Medical Science Liaison (MSL) and MSL Trainer in the Medical division of the pharmaceutical industry for the last 15 years, I have personally been a part of the unresolved puzzle of measuring MSL performance metrics. The question of MSL metrics continues to be a heavily debated topic, so I have decided to present a global and scientific perspective on it, drawing on my personal experiences and on interviews with senior-level MSL colleagues from various companies. In this article, I address the following three questions:

What parts of MSL activities should be included in their performance metrics?

How can MSL performance be quantified in a manner that is most scientific and relevant to the role?

When and how should individual MSL metrics be communicated by Management, and used to rank MSLs across an entire team?

Background: MSL function

The MSL is a role in the bio-pharmaceutical industry focused on learning and communicating the latest cutting-edge developments in healthcare while developing relationships with scientific experts, clinicians, or key opinion leaders (KOLs) within an assigned geographic area. The role involves connecting key scientific experts, researchers, and healthcare leaders with the company's drug development activities. Ultimately, MSLs help advance healthcare and identify new treatment options that may change the stories of many patients' lives. The persistent challenge of this dynamic role, however, has been how to quantify the primarily qualitative nature of the MSL job: developing relationships with healthcare experts and communicating scientific information.

The MSL is a dynamic function involving new and different internal tasks, and external relationships with healthcare leaders, that continue to evolve every day. It is exactly this dynamic nature of the regional, remote-based job that makes it extremely difficult to predict MSL value prospectively and to measure MSL metrics over time. So when designing and customizing MSL metrics in an organization, it is only appropriate that performance metrics for a job filled by healthcare professional scientists be as scientific as possible!

What part of the performance the MSL metrics are based on, and how the final metrics are calculated over time, determines how relevant and truly valuable the numbers are to the individual and to the company. More importantly, the true value of performance metrics to the individual and to the organization lies in their ability to predict and change behavior!

As a result, the MSL's value, both in advancing healthcare and to the pharmaceutical company for which they work, is reflected in the insights they communicate: covering as many "bases", or topics, as possible in every interaction, each an "at bat" opportunity to develop a long-term professional relationship or scientific dialogue with physician experts, researchers, and healthcare organizations.

A dynamic role such as the MSL requires a correspondingly dynamic metric that is scientifically based and quantitative.

This article covers the following topics related to MSL metrics:

How is MSL performance currently measured?

What are the unmet needs in current MSL metrics?

Why is the baseball "slugging percentage" concept more relevant than "batting average" in assessing MSL performance?

How should an individual MSL's performance be evaluated and communicated by Management?

How should individual MSL rankings on the team be calculated and communicated by Management?

1. How is the MSL performance currently measured?

Traditionally, MSLs are measured primarily by how many scientific experts (SEs) or KOL "customers" they meet with per quarter or per year, out of those they had originally planned to see. The standards are 400 scientific exchange interactions per year, or roughly over 30 a month, with an average of 3-4 days of field meetings per week and 1-2 interactions per field day. Sometimes, a set number of formal PowerPoint presentations on a topic per year has been used as an MSL metric as well.
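As a quick back-of-the-envelope check, these benchmark figures can be reproduced from the weekly cadence quoted above. This is only a sketch: the working-weeks figure is my own assumption, not an industry standard.

```python
# Sanity check of the interaction benchmarks quoted above.
# Field days and interactions per day are the ranges from the text;
# working_weeks_per_year is an assumption (52 weeks minus ~4 for leave and meetings).
field_days_per_week = (3, 4)
interactions_per_day = (1, 2)
working_weeks_per_year = 48

low = field_days_per_week[0] * interactions_per_day[0] * working_weeks_per_year
high = field_days_per_week[1] * interactions_per_day[1] * working_weeks_per_year

print(f"Annual interactions: {low}-{high}")          # 144-384
print(f"Monthly, at the high end: {high / 12:.0f}")  # ~32, i.e. "roughly over 30 a month"
```

Only at the high end of both ranges does the cadence approach the 400-per-year standard; at the low end it falls well short, which is worth keeping in mind when such targets are set.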

Some companies have utilized customer relationship management (CRM) databases that record all MSL interactions and require MSLs to log the time spent in each scientific interaction.

Other organizations survey, through a third party, the scientific experts and KOLs for whom an MSL is responsible, to evaluate the Medical "customer's" perspective. Many companies also collect qualitative "360" feedback on MSL performance and cultural "fit" from the internal colleagues relevant to the MSL's role and projects.

Various combinations and versions of these metrics are used at different companies; however, the primary metrics remain the number of interactions and the average number of field days per week with interactions.

2. What are the unmet needs in current MSL metric standards?

Across most companies, metrics today are rarely discussed throughout the year, and are absolute and non-specific. Because they are non-specific, they become irrelevant from the MSL's "scientific" perspective. Furthermore, they are not clearly evaluated or communicated; when they are communicated, the process usually involves a long sequence of steps and approvals, with many open-text fields and qualitative remarks open to unlimited interpretation between the MSL and management. So not only does the MSL metrics process evaluate scientists non-specifically, it is most often completed once a year in an arduous exercise that provides only a cross-sectional, episodic view of an MSL's entire year of effort and performance. Given this lack of clarity and relevance, the biggest dilemma of current metrics is the absence of actionable opportunities to change or inspire specific behavior and growth after the evaluation. Most companies still view MSL metrics from a task-oriented performance culture rather than a growth culture.

The number of scientists MSLs meet with, the average length of each interaction, and the average number of days in the field are general, non-specific measures. They resemble the batting average in baseball, which is calculated as hits per at-bat without regard for the type of hit: doubles, triples, and home runs cover two, three, and four bases respectively in a single at-bat, rather than one base only. The table below summarizes three top reasons the current MSL metrics do not reflect the role over the long term. Ultimately, today's metrics are limited in their ability to tell the full story of an MSL's merit. Similarly, batting average alone does not identify the best all-around baseball player: plenty of players who are not batting-average leaders lead the league in home runs, and are the most valuable players (MVPs) who carry their teams to championships.

Let's revisit why this approach to metrics is outdated and irrelevant. MSLs continuously uncover new, emerging scientists and add them to their target list of "customers", a live document in constant flux. They deliver value in many different ways, and each deliverable may carry different value to the company, the physician, or the researcher. It may be appropriate to meet with one KOL four times a year and another ten times a year, depending on the specific needs of scientific exchange. In addition, each scientific interaction with an expert may differ in length and format, and often involves discussion of multiple relevant topics. On one day, MSLs may respond to an unsolicited off-label question from a physician that their field Sales colleague is not legally allowed to answer. On another day, they may update a clinical trial investigator on an investigational product. The next day, an MSL may connect a new chief of the department of medicine at a major regional hospital to the company as a new strategic advisor, despite the doctor's history of rejecting the pharmaceutical industry and showing no interest in dialogue with the company. That last example is not just a "hit" but a "home run" for the MSL and the company; yet under the traditional "batting average" concept of MSL metrics, still in use at some companies, such a home run is simply lumped into one category with every other type of MSL scientific interaction "hit" in a customer relationship management (CRM) database, regardless of the difference in value.

Next, let's examine closely the routine MSL activity of engaging in scientific exchange with physician experts. An important question MSLs and their management should ask is: "Which topics were discussed, and how many distinct discussions took place during a single MSL interaction?" As an MSL for 15 years across big, specialty, small, rare-disease, and start-up pharmaceutical companies in different disease areas, I have learned how valuable and expensive (due to associated travel costs) each interaction with a KOL is, regardless of setting and length. Therefore, my own routine metric when planning a meeting is to attempt to cover as many bases as I can in that single interaction: I may plan to discuss different aspects or perspectives of a single topic, or carry on multiple different topic or agenda-item discussions in one interaction, to maximize the insights obtained. Learning to read a person's cues during a conversation, including non-verbal ones, and transitioning seamlessly from one agenda item to the next allows the MSL to deliver not just hits but home runs.

In addition to simply capturing data presented by KOLs, MSLs provide unique context and peer-to-peer reactions to the data from other attending experts. MSLs also report scientific insights related to unpublished information about the investigational or real-world use of company or competitor products in a particular specialty, communicated in a private or public setting in the MSL's presence. In addition, MSLs are considered the primary resource for territory-specific expertise, and are thus expected to know their entire line-up of scientist customers, hospitals, and other key institutions relevant to the company. They monitor changes in the scientists' leadership or roles, and any critical developments over time that affect business and patient care. Eventually, MSLs learn the specific scientific needs, behaviors, and culture of their experts, which may vary greatly from one geographic territory to another. One MSL's territory may require more frequent travel by car rather than by airplane than another's, or contain a greater overall number of experts and researchers, and therefore more attempts or "at-bats". An MSL may also differ from teammates in the research or educational capabilities of the hospitals in the assigned territory. These are all variables that are difficult, if not impossible, to adjust for with objective, quantitative metrics.

Some companies have utilized CRM databases that require all MSL interactions to record the time spent in a scientific interaction. However, cultural differences across institutions and geographic regions, as well as circumstantial factors, can make such a metric unreliable and irrelevant. First, logging the time an MSL spends with a KOL is intrinsically inaccurate unless the MSL is physically tracked by another person or device to record the exact time of the discussion; at best, the recorded time is an estimate prone to recall bias. Second, even when the time is accurate, there is no evidence that time spent with a scientific expert directly relates to "home run" deliverables that are valuable to the company. I have certainly engaged in three-hour group dinner discussions with experts that led to collaborations, and others that did not. I have also initiated brief 3-5 minute strategic interactions at scientific meetings, with target and non-target experts alike, that led to extensive, fruitful collaborations. Furthermore, cultural or behavioral differences among hospitals and geographic regions affect the probability of a long discussion.

Adding further complexity to the time factor in MSL interaction metrics is the format of the presentation or discussion. Generally, the strongest relationships people form are personal, based on live, in-person interactions.

However, people communicate via more formats than ever today in our distracted, digital global village without borders.

Thus, telephone conversations or online virtual slide presentations may sometimes be more valuable and cost-effective, avoiding the unnecessary business costs of travel, and the time spent traveling, for live meetings. As a result, MSLs learn to deliver valuable insights directly to healthcare leaders and experts.

And over time, MSLs master the art of presenting the right type and amount of evidence-based, fair-balanced information in the right format to the right individual at the right time.

Clearly, the number of MSL relationships formed is a strong baseline, but relationships are nevertheless subjective and qualitative; the ultimate MSL value proposition rests on the final numbers of collaborations, each weighted according to the company's regularly updated priorities.

Again, it is important to distinguish the pictures of MSL activities from the numbers. The picture of MSL relationships and friendships may tell a great story in their own lives over the long term, but the numbers of different types of collaborations (participation in company-sponsored trials, internal or external training or preceptorships, advisory boards, company product data presentations, publications, among many others) tell the MSL how that story ends in a particular year, from the MSL's company's perspective.

Many Medical Affairs divisions within pharmaceutical companies have made excellent attempts at capturing the MSL value proposition through metrics. However, I strongly believe this topic requires not just a consolidation of the best pieces of different models into one, but a drastic shift: from an unclear, qualitative, "picture"-interpretation model of MSL metrics to a quantitative "numbers" model that translates the MSL's pictures, as scientifically as possible, into a relevant evaluation of their value. One of the major problems across MSL organizations is a vicious circle around the dreaded, "taboo" topic of metrics for this role. The cycle involves MSL scientists who naturally question, and are never happy with, how they are evaluated, because they realize that most of the value they bring is qualitative, yet their performance is judged by quantitative metrics that often do not correspond to their merit. Management is required to provide MSLs with their final individual performance rating, and, behind the scenes, to rate MSLs relative to each other. Yet while company management creates the metrics, it most often does not prioritize the critical and controversial discussion of MSL performance metrics that is heavily debated on many MSL teams. Instead, in order to encourage hard work and collaboration, management usually acknowledges the qualitative nature of the MSL role to appease its teams. Thus, MSL metrics are too often passed over in favor of other topics, in an effort to keep everyone "happy" and still motivated to work. As a result, management across different companies usually provides non-specific, general, static MSL metric goals, usually after MSLs have already immersed themselves in the role, and usually once a year, with typically 1-2 evaluation periods, at mid-year and at year-end.
Most often, the goals communicated are unclear, leaving lots of room for imagination, debate, and "implicit" competition among accomplished scientists. As a result, MSLs often begin or continue meeting scientific experts without a clear understanding of their performance expectations or of the methods by which they will be evaluated. Management usually includes MSL behavior or culture in its performance metrics, but this component, like technical performance, is rarely discussed or prioritized to any great extent. Some companies include an individual career development plan as a factor in the metric, though rarely at smaller companies, given a fast-paced, volatile industry environment focused on company performance first and individual career development second. I have rarely observed MSL performance metrics being discussed with the team early, presented in detail, or weighted marker by marker before MSLs start to work. Thus, there is a clear opportunity to work prospectively: to define metrics as specifically and scientifically as possible, early and regularly, and to make the metrics process as explicit and prioritized as possible.

An MSL Metrics Case Study

When I worked at Salix Pharmaceuticals, an exemplary specialty company in gastroenterology, it grew tremendously in size and in its product pipeline. Salix became a huge success and a model for many companies, acquiring two other companies before it was in turn acquired by Valeant. Salix achieved an unprecedented stronghold in GI, and continues to operate under its well-established name as a division of Bausch Health. At Salix, the management dedicated extensive time as a team to creating, customizing, and communicating individual MSL metrics early in the year, to set the tone and a common team direction for the rest of the year. Management clearly communicated all the components of the MSL performance metrics and their relative percentages (KOL development 50%, company-sponsored trial support 20%, etc.), and left other components of the goal up to the needs of each MSL's own territory. MSL management involved MSLs in setting performance definitions and individual goals together, as a team, in a transparent manner. The MSL managers provided ample opportunities for individual and leadership development, as well as a series of behavioral training courses to reinforce a strong company culture. However, as is customary across the industry, the relative weight of meeting the behavioral and culture goals was not discussed prospectively in relation to the MSL's technical performance, and the questions assessing behavior and cultural fit appeared only during the performance review. Overall, this company differed from most others I have worked in because it did not present metrics as a dreaded "necessary evil" but instead treated MSL metrics as a science. The Salix Medical division took full advantage of frequent MSL performance and milestone discussions, held quarterly rather than yearly or bi-annually (twice a year).
In addition, MSL management inspired MSLs to work harder by creating an MSL of the Year award and other Medical Affairs MSL awards. This company certainly prioritized MSL metrics and took them to a new, scientific dimension. Other companies I have worked in, such as AstraZeneca, also offered extensive, valuable resources to improve performance, encourage specific behaviors, and customize MSL metrics. However, when I meet MSL colleagues from across the industry, they may describe a particular practice or environment they loved at a previous organization, yet I have never met a single MSL who told me that he or she worked at a company with the "best MSL metrics", or with an MSL metric model that truly captured all the aspects of this dynamic role.

Clearly, no performance metrics can ever be expected to fully reflect the MSL role.

MSL metrics should not just be representative of the function; they should also inspire growth, progress, and a particular behavior.

But for a systematic, scientific, and dynamic culture of growth and truly high performance to take hold, MSL and other functional metrics can be "quantified", streamlined, and simplified. Metrics may be prioritized while decreasing the time needed to complete them and increasing the frequency and relevance of performance evaluations to the organization.

3. Why is slugging percentage more relevant to the MSL role?

As the comparison table below demonstrates, the Slugging Percentage (SP) model, unlike the commonly used batting average model, adjusts for the different types and weights of MSL accomplishments over a given period of time. The SP concept essentially encourages MSLs to cover and record more "bases" per interaction or opportunity, whether with an external KOL or on an internal project.

Because each MSL "hit" delivers a different outcome, with a different weight in value to the healthcare leader and to the company, a "Slugging Percentage" (SP) model of MSL metrics assigns different weights to the different types of MSL deliverables, as defined by the team and updated as needed.

The table above shows how these statistics are calculated in baseball, as some readers may not be familiar with the game, which is not just America's pastime but also a game of hundreds of different statistics. Unlike batting average, slugging percentage is the ratio of total bases to at-bats. A home run therefore carries four times the weight of a single: a home run is a hit that lets the player reach all four bases at once and score for the team, whereas a single, or base hit, advances the player one base without a score.
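For readers new to these statistics, a minimal worked example (with made-up at-bat numbers) shows how the two formulas diverge for the very same player:

```python
# Batting average vs. slugging percentage for the same hypothetical hitter.
# 10 at-bats: 2 singles, 1 double, 1 home run, 6 outs (made-up numbers).
at_bats = 10
singles, doubles, triples, home_runs = 2, 1, 0, 1

hits = singles + doubles + triples + home_runs
total_bases = 1 * singles + 2 * doubles + 3 * triples + 4 * home_runs

batting_average = hits / at_bats             # 4 / 10 = 0.400
slugging_percentage = total_bases / at_bats  # 8 / 10 = 0.800

# A hitter with 4 singles in 10 at-bats has the same .400 batting average,
# but only a .400 slugging percentage: the weighting is the entire point.
```

Two hitters with identical batting averages can thus have very different slugging percentages, which is exactly the distinction the MSL analogy relies on.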

Also, an SP model of MSL metrics may reflect not just the interactions but the overall number of topics and discussions per interaction, to better represent an MSL's ability to cover many bases during a single meeting. This measure describes an MSL's efficiency and productivity, not just effort. Thus, the SP MSL model is a type of "Medical Productivity Index", or MPI, that gives a more dynamic and representative view of the MSL's daily role. It requires additional work, strategy, and maintenance upfront compared to the current metrics; however, it offers a far greater likelihood of predicting and changing MSL behaviors and activities, inspiring growth and progress within assigned territories over the long term.
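As a sketch of how such a "Medical Productivity Index" might be computed: the deliverable categories and weights below are purely illustrative assumptions, since, as argued above, each team would define and update its own.

```python
# Illustrative sketch of a slugging-percentage-style MSL metric.
# Categories and weights are hypothetical; a real team would define its own.
WEIGHTS = {
    "routine_exchange": 1,    # a "single": one topic discussed
    "insight_reported": 2,    # a "double": actionable insight captured
    "trial_support": 3,       # a "triple": company-sponsored trial support
    "new_collaboration": 4,   # a "home run": new strategic collaboration
}

def productivity_index(deliverables, interactions):
    """Weighted 'bases' delivered per interaction ('at-bat')."""
    total_bases = sum(WEIGHTS[kind] * count for kind, count in deliverables.items())
    return total_bases / interactions

# Example quarter: 30 interactions yielding a mix of deliverables.
quarter = {"routine_exchange": 20, "insight_reported": 6,
           "trial_support": 2, "new_collaboration": 1}
mpi = productivity_index(quarter, interactions=30)
print(f"Medical Productivity Index: {mpi:.2f}")  # (20 + 12 + 6 + 4) / 30 = 1.40
```

Adjusting the WEIGHTS table is how Management would steer behavior under this model: raising the weight on a deliverable signals that covering that "base" matters more this year.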

4. How should MSL performance be evaluated and communicated by Management?

Metrics expectations and evaluation methods should be fully disclosed and communicated early. In order to inspire collaboration, Management needs to set a general direction or destination for its team to follow. Thus, MSL metrics expectations need to be communicated early, before the performance review period, to ensure efficiency and productivity throughout the year. MSL metrics describe an episodic, cross-sectional estimate of an MSL's performance, and are never 100% accurate or fully reflective of someone's merit or overall value; thus, effort is needed to design an evaluation that is as specific, longitudinal, and relevant to the dynamic MSL role as possible over time. MSL metrics need to be spelled out in specific detail and communicated early by Management so that they serve the larger purpose of inspiring specific MSL behaviors, consistent performance, and growth, rather than merely ranking individuals or determining bonuses at year-end. Overall, the MSL performance metrics discussion should not be an episodic exchange dreaded by all parties involved, but a welcomed, transparent, ongoing conversation that builds an MSL organization culture and behavior of "growth".

There is no single perfect, most representative metric for any function, including the MSL role. Thus, different types of primary and secondary metrics should be utilized and averaged, or used as tie-breakers if needed. An MSL metric of efficiency should always be considered for this role, defined as the number of bases covered by the MSL per attempt (at-bat). Translated into MSL language, the number of discussion topics per interaction is an important measure of efficient use of time during a scientific exchange. Implementing a database, or running reports, that captures only the number of scientific exchanges an MSL had with KOLs decreases the ROI (return on investment) of the metrics and of any CRM database.

MSL metrics evaluation parameters and methods must be carefully designed to ultimately change the behavior of the end users. After a particular set of metrics has been defined and implemented consistently for a given period, the definitions, categories, and methods of input and output need to be evaluated for relevance to the role and to the organization, and updated accordingly with new, specific, and consistent definitions by the MSL end users and Management. Metrics implemented through a CRM database as an isolated administrative task, or as a preventative documentation tool to avoid trouble during a potential audit, serve little if any value. Metrics become valuable only when they can predict and change behavior and evaluate growth and progress.

Metrics should be defined, regularly updated, and customizable to the team's changing needs. Metrics become valuable when they best reflect the nature of the MSL role, and when they are defined, redefined, maintained, and evaluated consistently.

Metrics should include regular and frequent evaluations that are short and specific, but not time-consuming. Performance results should be communicated regularly and frequently, about 2-4 times per year, preferably quarterly. The key concept of dynamic metrics is that they are meant to inspire growth, as progress and improvement are the overall goals of any company, disease state, territory, and MSL, including the most seasoned and accomplished. When appropriate definitions and goals are set early, individual MSLs and their management save time during the evaluation period. If the evaluation is simplified and expressed with numbers instead of extensive open text that is subjective and open to interpretation, it allows for more frequent evaluations and ultimately more opportunities for growth and improvement!

Metrics must be captured and evaluated by the appropriate people. MSL metrics should draw on three different sources: 1) the MSLs themselves, 2) MSL Management and relevant colleagues, and 3) the MSLs' KOL "customers", the healthcare leaders in the assigned territory. Once expectations and definitions are clearly set, MSLs should report their activities consistently and regularly to deliver accurate insights into their performance. MSL Management should also capture individual MSL metrics from the Management perspective and cross-reference them with the MSL's own detailed assessment; to do this, Management should engage colleagues who directly observed or collaborated with the individual MSL. Finally, the MSL's "customers" in the field play a critical role in assessing the MSL value proposition from their perspective, and thus ought to participate in evaluating MSLs.

Metrics must include performance plus behavior, to encourage collaboration and growth within the organization. Years of experience in the pharmaceutical industry, or in any workplace, teach us that credentials, skills, background, and tenure have no reliable relationship to behavior or culture. How many times have you worked with an accomplished, experienced colleague whom you lost respect for because of how he or she behaved toward you and others? I purposefully call the second component of my metrics model behavior rather than culture, because the culture of a company implies character, which is largely unchanged in adults. Company culture most often targets employee character, which spans a wide spectrum, depends on circumstances and environment, and therefore changes in how it is expressed all the time. Thus no cultural "fit" can ever truly be enforced; a cultural fit is a perception, not a reality, in a team of adults. Behavior, on the other hand, can be changed in adults at work.
A specific type of behavior may be inspired and reinforced by a specific direction or slogan communicated by Management (e.g., AstraZeneca's "winning the right way"). Thus, my company culture equation is culture = character x behavior, in which behavior is a critical "multiplier" factor. High individual performance has little relevance to company performance and does not always predict team performance or productivity. High individual performance combined with appropriate behavior, however, is much more relevant to company performance, because together they define a synergistic and productive environment of collaboration. Well-designed MSL metrics that combine performance and behavior in equal weight ultimately provide a more comprehensive and accurate picture of the overall impact of individual performance on the team and organization, which is the ultimate goal of any metrics within any company.

Metrics must cover all bases. For metrics to be meaningful and to remain valuable over time, the methods and definitions of evaluation have to be specific, comprehensive, and as relevant as possible to the role.

Individual MSL metric results should be communicated transparently on an individual basis, and team metrics in an open team forum. An action plan to adjust individual and team expectations should always follow any discussion of metric results.

EXAMPLE: MSL Medical Productivity Index (MPI) - the "MSL Slugging Percentage" Model

Definition

MSL Medical Productivity Index (MPI) = MSL Performance Evaluation (50%) + MSL Behavior Evaluation (50%)

Sources

MSL's self-reported metrics, validated by MSL Management - Performance

Surveys of the MSL's KOL scientific customers - Performance & Behavior

MSL Management surveys - Performance & Behavior

Methods

Performance expectations are prioritized prospectively. There is a primary metric that is selected and communicated to the team prospectively and updated if needed at different evaluation periods. In addition, there are secondary metrics that are selected and communicated to the team, which are averaged in with the primary metric at every evaluation period (quarterly, preferred) and may be used as tie-breakers if and when ranking MSLs on a team.

Definitions of covering one vs. four bases in a single interaction are agreed upon by the team. For example, a single base (1 point) may be defined as a scientific interaction that includes a response to an unsolicited request for Medical Information or an MSL insight reported from the interaction, while higher point values are assigned to MSL outcomes that occur less frequently but have direct impact on the company, such as facilitating an educational MSL roundtable or a Medical Advisory Board.

The "Slugging Percentage" system is imperfect and fully quantitative, yet it is dynamic, customizable, and reflective of the relative value and likelihood of one type of MSL deliverable, or MSL "hit", versus another. "Singles" worth 1 point each will make up most MSL hits as defined above, with routine scientific interactions contributing the most to the overall total of points, or "bases", covered by the MSL over a given period of time; "home runs" worth 4 points each will occur less frequently, as expected both in the MSL universe and in baseball. The relevance and accuracy of this type of metric depend on MSL management's careful definition of the different deliverables, with assigned point totals proportionate to their relative frequency. A "home run" should be a realistic, attainable MSL deliverable whose point total reflects its relative likelihood of occurrence versus a routine scientific exchange discussion: if an outcome is estimated to occur four times less frequently than a KOL interaction with no other deliverables, it should be assigned 4 points; if it is ten times less likely to occur during the evaluation period, it could be assigned 10 points.
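The point arithmetic above can be sketched in a few lines of Python. This is a minimal illustration only: the deliverable names and point values are hypothetical examples, since each team would define its own point table prospectively.

```python
# Illustrative sketch of the "Slugging Percentage" point system.
# Deliverable names and point values are hypothetical; each team
# defines its own table prospectively and revisits it each quarter.
POINT_VALUES = {
    "scientific_interaction": 1,   # "single": routine exchange with an insight
    "medical_info_response": 1,    # "single": unsolicited request answered
    "conference_coverage": 2,      # "double": less frequent deliverable
    "internal_project": 3,         # "triple": weight agreed with management
    "advisory_board": 4,           # "home run": rare, high-impact outcome
}

def total_bases(deliverables):
    """Sum the point values ("bases") for a list of recorded deliverables."""
    return sum(POINT_VALUES[d] for d in deliverables)

def slugging_percentage(deliverables, attempts):
    """Total bases divided by attempts (interactions + assigned projects)."""
    return total_bases(deliverables) / attempts if attempts else 0.0

quarter = ["scientific_interaction"] * 20 + ["advisory_board"]
print(total_bases(quarter))              # 24 total bases
print(slugging_percentage(quarter, 30))  # 0.8
```

Note how the ratio mirrors baseball's slugging percentage: frequent "singles" dominate the volume, while a rare "home run" moves the total disproportionately.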

Field coaching reports may be used as a reference to assist with quarterly MSL performance and behavior surveys

Internal certifications may be used as a reference to assist with quarterly MSL performance and behavior surveys

PERFORMANCE METRICS (50% of the Total Score)

The primary performance metric in the SP MPI model is total outcomes, expressed as total points based on the MSL Activity Point Total definitions table above over a given period of time. This represents the total bases covered, or total deliverables with impact, by the MSL. The secondary metrics are the "Slugging Percentage", i.e., the ratio of total outcomes to attempts, with attempts defined as the sum of MSL-KOL interactions and internal projects attempted or assigned; followed by total attempts; number of discussion topics per interaction; and possibly number of formal presentations (not listed in the Performance Metrics Sample table below), among others.

Other secondary metric analyses to consider adding to the current list of filters of MSL performance metrics:

1) KOL Target Definitions - target vs. non-target HCP (healthcare professional) interactions, considered as at-bats, or attempts at MSL deliverables (hits). Target KOLs are those identified prospectively at the start of every quarter; non-target KOLs are new KOLs with whom the MSL has had an interaction. Consistent addition of new non-target KOLs reflects the MSL's ongoing research and growth within the territory.

2) Routine vs. non-routine MSL-KOL interactions - routine interactions are in-office scientific discussions with the KOL, while non-routine interactions take place at scientific conferences, Grand Rounds, over a meal, or in social circumstances.

3) Types of KOL healthcare professional (HCP) specialists and sub-specialists within a field with whom the MSL interacted.

4) Types and frequencies of discussion topics per interaction - discussion definitions and frequency in scientific exchange interactions, i.e., the number and type of bases covered in a single interaction. The number of discussions per interaction, not just the overall number of interactions, is the primary metric in this sub-category; it is a critical measure of MSL value, demonstrated by the ability to cover many bases in a single interaction strategically.

Internal MSL Project Performance Metric - part of the primary MSL Slugging Percentage metric above. The weight of this deliverable is determined by the number of people the project deliverable involved and directly impacted, after project approval by the MSL manager. A project is defined based on an unmet need identified by either the MSL or MSL management. A project is considered a single at-bat, or attempt at a deliverable, and its ultimate impact and final evaluation determine whether it is a single, double, triple, or home run. Every project receives an evaluation and feedback after completion, agreed upon between the MSL and the MSL manager, and is added into the MSL's slugging average primary metric above.
The dynamic nature of the SP MPI concept, combined with frequent (quarterly) metrics evaluations, has important implications for adjusting to differences in unmet needs and capabilities between territories. For example, if MSL #1 has a territory with significantly fewer KOLs, research centers, or scientific needs than other MSLs, then MSL #1 has fewer "at-bats", or opportunities to score points. This MSL should therefore become a primary candidate for internal MSL projects, to continue providing value to the company and to have additional opportunities to cover more bases and score more performance points.

A minimum number of at-bats, or interactions, per quarter or given period of time is necessary, just as in baseball, in order to qualify for ranking. This minimum must be determined per quarter, should be easily attainable by all, and should not be a stretch goal by any means. Thus, in this model, the number of interactions an MSL records is a minimum standard, not a final, primary metric, as it so often is on MSL teams.

Below is a Table of an MSL Medical Productivity Index (MPI) Performance Scorecard

MSL BEHAVIORAL PERFORMANCE (50% OF TOTAL SCORE)

No performance evaluation, regardless of how quantitative, should be deemed complete without a quantified assessment of the behavior of the MSL employee. Well-thought-out, detailed questions with graded, Likert-type scales that are multi-directional (a positive response earns a higher score on some questions, a negative response on others) can make a difference in quantifying both performance and behavior.

Anonymous five-question surveys (two to four times per year) of external MSL stakeholders, conducted by a third-party vendor, on the value of the resources provided to their function, timeliness, and communication accuracy and quality.

Quick internal 10-question surveys of the MSL's managers and cross-functional partners, with feedback relevant to MSL projects, activities, performance, and behavior, at least two to four times per year.

Examples of behavioral questions in MSL Management Survey:

How likely is this MSL to spend time after work, privately and one-on-one with a teammate, to share knowledge of a topic or resource? (Range of 1: least likely to 5: most likely; 5 is the most positive score.)

How likely is this MSL to choose a project for its optics or convenience rather than for authentic interest and relevant experience? (Range of 1: least likely to 5: most likely; 1 is the most positive score.)
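The multi-directional scoring described above, in which a high raw response is positive on some items and negative on others, can be normalized before averaging. The sketch below is hypothetical: the question IDs and the set of reverse-scored items are illustrative only, assuming a 1-5 Likert scale.

```python
# Hypothetical sketch of scoring a multi-directional Likert survey on a
# 1-5 scale. Question IDs and the reverse-scored set are illustrative.
REVERSE_SCORED = {"q2"}  # e.g., "chooses projects for optics" - lower is better

def normalized_score(question_id, response):
    """Map a 1-5 response so that 5 is always the most positive score."""
    if question_id in REVERSE_SCORED:
        return 6 - response  # invert: a response of 1 becomes a score of 5
    return response

answers = {"q1": 4, "q2": 1}  # q2 = 1 means the negative behavior is unlikely
scores = {q: normalized_score(q, r) for q, r in answers.items()}
average = sum(scores.values()) / len(scores)
print(scores)   # {'q1': 4, 'q2': 5}
print(average)  # 4.5
```

Normalizing this way lets all behavior items be averaged into a single score without the multi-directional wording canceling itself out.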

KOL surveys are distributed quarterly by a third party to randomly selected KOLs with previously documented MSL interactions. It is possible to measure the growth of a particular MSL-KOL relationship by conducting serial surveys with the same KOL over time. KOL survey results should be communicated directly to the MSL quarterly, promptly upon completion, because feedback is most valuable when it is timely.

MSL Management survey results should likewise be communicated to the MSL quarterly, promptly upon completion, to ensure the feedback remains relevant and valuable.

Below is a Table of a Sample MSL MPI Behavior Scorecard

5. How should individual MSL rankings on the team be calculated and communicated by Management?

The table below is an example of a small team of four MSLs following a quarterly MPI model with a 50% performance and 50% behavior split. MSL #1 is by far the highest-performing individual on the team, ranked #1 in Performance; however, due to a last-place ranking in Behavior, his/her final rank on the team after Q1 is #2. Note that the table presents the MSLs' performance after the first quarter only. Once the definitions are in place in a template format and agreed upon by the team, it becomes possible to perform evaluations that are frequent, short, and not drawn out. Once the MSLs are ranked, individual rankings should be communicated transparently and immediately upon availability, on an individual or team basis, every quarter (preferably), as well as at the end of the year together with the team's overall performance.
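The 50/50 combination and the minimum at-bats qualification rule can be sketched as follows. All names, scores, and the threshold are hypothetical numbers chosen to mirror the scenario above, where the top performer's weak behavior score drops them to second place; they are not taken from the article's table.

```python
# Hypothetical sketch of the 50/50 MPI ranking. Names, scores, and the
# minimum-interactions threshold are illustrative only.
MIN_INTERACTIONS = 10  # minimum "at-bats" per quarter to qualify for ranking

msls = [
    # (name, performance score, behavior score, interactions this quarter)
    ("MSL 1", 95, 62, 40),  # top performer, last-place behavior
    ("MSL 2", 78, 86, 35),
    ("MSL 3", 70, 80, 30),
    ("MSL 4", 72, 84, 8),   # below the minimum, so not ranked this quarter
]

def mpi(performance, behavior):
    """Medical Productivity Index: equal weight of performance and behavior."""
    return 0.5 * performance + 0.5 * behavior

qualified = [m for m in msls if m[3] >= MIN_INTERACTIONS]
ranked = sorted(qualified, key=lambda m: mpi(m[1], m[2]), reverse=True)
for rank, (name, perf, beh, _) in enumerate(ranked, start=1):
    print(rank, name, mpi(perf, beh))
# 1 MSL 2 82.0
# 2 MSL 1 78.5
# 3 MSL 3 75.0
```

In this toy run, MSL 1's behavior score pulls a clear performance lead down to second place, which is exactly the dynamic the 50/50 split is designed to surface.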

Although this model may seem complex and time-consuming at first glance, in reality it is a concept that saves time and resources in the long term, because it is based on surveys and templates with minimal free text, unlike the qualitative, subjective discussions that typically take place once or twice a year in most pharmaceutical companies and are usually drawn out over several weeks. More importantly, this model is an attempt at a more bottom-line, fair and balanced, growth- and team-oriented approach to scientifically measuring healthcare professionals and scientists.

The MPI described in this article is a dynamic process that may be adapted for different environments and circumstances. For example, unlike the table shown above, which ranks MSLs against one another, the model may be modified to rank highest those MSLs who demonstrate the greatest 1) growth and 2) consistency within their own territory on particular primary, secondary, or overall metrics of interest to Management over time, such that they serve as their own controls, scientifically speaking.

Below is a summary list of key considerations or advantages of the Slugging Percentage Perspective of MSL Metrics presented in this article.