
By John Abreu, CSCS.

Athlete monitoring and data tracking have recently become an increasingly talked-about aspect of the high-performance sports landscape. While every coach has done some monitoring in their time, whether through conversation or observation, the advent of new sports technology has opened up a vast number of new avenues through which to monitor one’s athletes. The only problem with these newly introduced technologies is the associated cost, which may limit their availability to coaches. However, with a little hard work on the back end, and some creativity, coaches can very easily turn a “have-not” program into a “have-some” using readily available software. This article is based on my experience using monitoring at the collegiate level.

Plan – What data are you trying to collect? What is useful? What is not?

Before sitting down for our first concept meeting, the staff I was part of did quite a bit of research on athlete monitoring questionnaires – mainly the Hooper-Mackinnon and the RESTQ-Sport. We used already-established variables, but allowed the process to be very creative, and as such initially came up with over 25 variables to include in our surveys. Some variables, such as sleep quality and quantity, were easy to include and met with no hesitation from anyone on our staff. Conversely, other variables were a tough sell and not easy to justify – for example, being largely a commuter campus, length of commute was one of our initial suggestions. As we pared down the questions, we were clear that we did not want a survey that took a long time to complete (or interpret, for our sake), or that would have the athletes fall asleep from boredom while completing it. Ultimately, we whittled the list down to 10 variables that we thought would provide the sport coaches and our staff with valuable feedback, while also choosing variables we thought we could make an impact on – whether through an intervention of our own, or through education. Acknowledging that there may be issues our 10-question survey would not account for, we added a fairly straightforward “comments” section.

We also added a second portion to the survey, asking athletes to highlight body parts in which they were experiencing pain or injury, hoping to identify areas we could improve through our training programs, and to help our athletic training and physiotherapy staff identify potential or unresolved issues.

Designing the Survey – Making it concise for the staff and the athletes

In the design of the survey, we had two main goals in mind – making it easy for the athletes to complete and follow, while also making it easy for us as staff to implement, collect, and analyze. Since this was the first time we tried to implement the surveys, we chose to carry them out in a pen-and-paper format. While this was not the most environmentally friendly approach, it was the easiest to implement given our resources.

As a scale, we chose that each question would be out of 5 – again, making it easier for us to calculate a score, and making it so that the athletes did not have to think too much when selecting their answer. The layout of the survey was such that the questions followed a numbered sequence, and like questions were grouped together. The scale for each question had descriptor statements for the lowest and highest ratings, ensuring the athletes always had a guide as to what we were looking for with each question.

Aesthetically, we ensured that the surveys looked very professional, with the colour scheme following that of our athletic teams. This was a subtle aspect of the design that drew positive reactions from the athletes, and comparisons to professional teams from incoming players.

Communication & Implementation – Ensuring Staff and Athletes are On Board

Through the conception of our surveys, we realized that the way we communicated our intent and planned delivery would be an essential step in ensuring accurate implementation. We knew that if we did not provide a well-thought-out explanation to the coaches, they would be less likely to allow us practice or meeting time to implement the surveys. We also wanted them to be aware that this was not a tool through which we would try to change their practice routines, merely one that would complement their (and our) intuition about the status of the athletes, and give us some insight as to where positive changes could be made.

Getting the athletes on board was another step that posed its own challenges, and we knew we would encounter a variety of issues – apathy, compliance, and desirability bias among them. To counter these potential issues, we were thorough in our explanation that this monitoring tool was there to improve their experience as student-athletes, to make sure we knew if there were issues that needed to be addressed, and to best formulate solutions if any issues arose – we did not want them to think we were passing on information by which a coach could label a player “soft” or “high-maintenance.” We also asked our athletes to communicate openly using the “comments” section of the survey, a place where they could elaborate on or clarify issues, as well as share ideas about what could be done better.

One thing that became apparent over the course of the first season we implemented the surveys was the need to keep the athletes updated on the entire process in order to maintain their compliance. As such, we granted interested athletes the opportunity to view their previous scores, but due to privacy and desirability bias concerns, we opted not to show them the team’s results during the course of the season. Prior to our second season of implementation, we did show athletes a graphical representation of the team over the course of the previous season. However, we made sure to keep results confidential, blacked out names on the display, and presented it without any decipherable order (i.e., alphabetical, by position, or by roster number).

As for implementation, in concert with the sport coaches, we chose to conduct the surveys just before practices or at the start of team meetings – depending on what each coach thought was appropriate. This set-up allowed us to have the greatest number of players together at the same time, and to hold variables like time of day constant for each team. On average, we conducted the surveys every 2-3 days to ensure it didn’t become a tedious task for the athletes, and chose not to conduct any surveys on game days so as not to disrupt the athletes’ regular routine.

Analyzing and Interpreting the Data – Making sense of the numbers

The analysis of the data was done entirely in Excel. The wellness survey values for each athlete were entered, giving us a value out of 50 for each player (rescaled to a value out of 100), a team average for each survey question, as well as a team average as a whole. We came up with a formula to give us a value for the pain survey, and then combined the two values into an overall score. Due to the work we put into the Excel document on the back end, as soon as we entered the data we were presented with bar graphs for the wellness survey, the pain survey, and the combined score. We also tracked the overall scores longitudinally, with each day assigned a colour between red (bad) and bright green (very good), depending on the value.
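For readers who prefer to script this step rather than build it in Excel, the scoring logic can be sketched in a few lines of Python. Note that the equal weighting of the wellness and pain scores and the colour thresholds below are illustrative assumptions, not our exact formulas.

```python
def wellness_score(ratings):
    """Ten ratings, each out of 5, summed to a value out of 50
    and rescaled to a value out of 100."""
    assert len(ratings) == 10
    return sum(ratings) / 50 * 100

def combined_score(wellness, pain, wellness_weight=0.5):
    """Combine the wellness and pain scores (both out of 100) into one
    overall value; the equal weighting here is an assumption."""
    return wellness_weight * wellness + (1 - wellness_weight) * pain

def traffic_light(score):
    """Map an overall score to a colour band for the longitudinal
    display (thresholds are illustrative)."""
    if score >= 80:
        return "bright green"
    if score >= 65:
        return "green"
    if score >= 50:
        return "yellow"
    return "red"

# One athlete's survey: ten answers out of 5, plus a pain score out of 100.
ratings = [4, 3, 5, 4, 3, 4, 4, 5, 3, 4]
w = wellness_score(ratings)          # 78.0
overall = combined_score(w, 85.0)    # 81.5
band = traffic_light(overall)        # "bright green"
```

From here, producing the team average per question is a matter of averaging each question's ratings across athletes, exactly as the Excel formulas did.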

The visual output allowed us and the sport coaches to quickly and efficiently digest the data by comparing the bar graphs and colour output of the overall values. We also scanned for team trends for a particular value on the wellness survey.

As we started to accumulate more data, we started to take a mean value for each player, as well as a standard deviation, in order to give us an idea of improvement (or deterioration), as well as daily variance.
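As a rough sketch in Python (using the standard statistics module), comparing each athlete to their own baseline might look like the following – the one-standard-deviation threshold is an illustrative assumption, not the exact rule we applied.

```python
from statistics import mean, stdev

def baseline(scores):
    """Mean and standard deviation of an athlete's historical scores."""
    return mean(scores), stdev(scores)

def flag_if_low(today, scores, k=1.0):
    """True if today's score sits more than k standard deviations below
    the athlete's own average -- i.e. compare each athlete to
    themselves, not to the team."""
    mu, sd = baseline(scores)
    return today < mu - k * sd

# An athlete's overall scores (out of 100) from previous survey days.
history = [78.0, 82.0, 75.0, 80.0, 79.0]
flag_if_low(70.0, history)   # a low day relative to this athlete's baseline
```

This is what makes the individual comparison meaningful: an athlete who habitually rates themselves at 75 is not flagged on a 74, while an athlete who usually sits at 90 would be.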

Our Learning Outcome – Reflection on Execution

As a whole, the best outcome of implementing the surveys was the learning experience it provided us as coaches. Analyzing team trends for a given wellness value proved to be one of the best indicators of where we could effect change. For example, the surveys showed that a vast majority of athletes were not hydrating enough throughout the day. To remedy this, we suggested to the sport coaches that water bottles be included as part of the kit players received. This simple action resulted in improved hydration scores as measured by our survey. During implementation, through conversation with our sport coaches, we also found the need to individualize surveys for other values coaches were interested in – for example, one coach asked for an “academic stress” value on the wellness survey.

The design of the survey also changed slightly. We found that when dealing with college-aged athletes, something as simple as ensuring the body map on the pain survey had features to distinguish front from back (a smiley face) made our job of interpreting the data much easier.

Lastly, we saw value in also interpreting a given athlete’s information as an individual – rather than comparing him or her to the rest of the team, we compared athletes to their own previous results and their own variance. This should have been a given, as some athletes will rate themselves much more stringently than others.

Perhaps the biggest surprise to us was to see how much a big win or loss impacted the results for the survey as a whole, as well as individual questions. Even if two games were played under similar conditions, at the same time of day, same location, and same time during the academic calendar, players would rate themselves much higher after a win than after a loss.

As mentioned previously, we did not want to collect data for the sake of data, so after carrying out the surveys, and creating some interventions to better serve the wellness of the athlete, we sat down and further refined the survey to better identify areas of opportunity. This process was best described by Dr. John P. Sullivan at the Boston Sports Medicine Performance Group Summer Seminar this past May, and is outlined in the diagram below.

Final Thoughts – A Realm of Possibility

There are a few pieces of advice I would share with coaches looking to implement their own surveys. First, start with something relatively simple. Second, remain in constant communication with the coaches, both to ensure the data is relevant to them and to learn how to shape the process to best serve them.

Lastly, when you have a good survey in place, start to think outside the box in terms of data collection, and see what data you can collect to better serve your athletes. Using one of our teams as an example, we wanted to look at the relationship between minutes played and survey scores (at a glance there was not much of a relationship, but we wanted to look at the effect on individual players). We also wanted to quantify training load for practices, both as a learning experience for our sport coaches and as a means of tracking total workload over the course of a season.

Ultimately, with a little work on the back end, good communication with our sport coaches, and a good relationship with our athletes, we were able to create a working system that allowed us to monitor our athletes while remaining open to further opportunities for improvement.