The good people at Pro Football Focus spend enormous amounts of time breaking down every player’s performance on every individual play throughout the season. In the end, players can then be given a final rating somewhere between zero (poor) and 100 (elite). If you want to learn more about their methodology, you can read PFF’s Player Grade overview.

Here’s a portion of what you will read if you click on that link:

On every play, a PFF analyst will grade each player on a scale of -2 to +2 according to what he did on the play. At one end of the scale you have a catastrophic game-ending interception or pick-six from a quarterback, and at the other a perfect deep bomb into a tight window in a critical game situation, with the middle of that scale being 0-graded, or ‘expected’ plays that are neither positive nor negative. Each game is also graded by a second PFF analyst independent of the first, and those grades are compared by a third, Senior Analyst, who rules on any differences between the two. These grades are verified by the Pro Coach Network, a group of former and current NFL coaches with over 700 combined years of NFL coaching experience, to get them as accurate as they can be. From there, the grades are normalized to better account for game situation; this ranges from where a player lined up to the dropback depth of the quarterback or the length of time he had the ball in his hand and everything in between. They are finally converted to a 0-100 scale and appear in our Player Grades tool. Season-level grades aren’t simply an average of every game-grade a player compiles over a season, but rather factor in the duration at which a player performed at that level. Achieving a grade of 90.0 in a game once is impressive, doing it 16 times in a row is more impressive.

No player evaluation system is perfect; in the sport of football, quantifiable evaluation is especially difficult. PFF's methodology may be flawed, but it can also provide a starting point for player comparisons: every player is graded against the same criteria, which makes it possible to give each one a rating specific to his position. Based on the free, public data available at Pro Football Focus, here are the 2019 final ratings for the Washington Redskins' key defensive contributors and special teams specialists.
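To make the mechanics of the PFF description above concrete, here is a minimal sketch of that kind of pipeline: per-play grades on a -2 to +2 scale, a conversion to 0-100, and a season grade that weights games by playing time. PFF's actual normalization is proprietary, so the linear mapping and the snap-weighting formula below are assumptions for illustration only, not PFF's real math.

```python
# Hypothetical sketch of a PFF-style grade aggregation.
# Both formulas are illustrative assumptions, not PFF's actual method.

def play_to_scale(grade: float) -> float:
    """Map a per-play grade in [-2, +2] onto a 0-100 scale (linear assumption)."""
    return (grade + 2.0) / 4.0 * 100.0

def season_grade(game_grades: list[float], snap_counts: list[int]) -> float:
    """Snap-weighted season grade: games with more playing time count more,
    mirroring PFF's note that duration of performance matters."""
    total_snaps = sum(snap_counts)
    return sum(g * s for g, s in zip(game_grades, snap_counts)) / total_snaps

# A 0-graded ("expected") play sits at midscale:
print(play_to_scale(0.0))                        # 50.0
# Sustaining a 90.0 across 16 full games keeps the season grade at 90.0:
print(season_grade([90.0] * 16, [60] * 16))      # 90.0
```

Note that under this weighting, one 90.0-grade game in a season of average play moves the season grade only a little, which matches PFF's point that doing it 16 times in a row is what impresses.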

The Redskins' top five defensive players, as ranked by PFF, were:

It might be argued that Apke and SDH landed in the top five largely by virtue of small sample sizes, having played just 19% and 34% of the defensive snaps, respectively.

Inarguably, Quinton Dunbar and Matt Ioannidis were among the very best Redskins defenders in 2019, belying their draft positions (or in the case of Dunbar, lack of a draft position). Dunbar, an undrafted player who switched positions from wide receiver to cornerback in his rookie training camp, is not only the highest-rated defensive player on the team; his grade led all Redskins, offense included, in 2019.

Meanwhile, Josh Norman’s 45.6 grade is a sign of how far he has fallen. Norman looked slow, overmatched, and, at times, uninterested on the field in 2019. It’s hard to imagine him playing for the Redskins in 2020, even with a reunion with Ron Rivera in the offing. The main question is whether the Redskins can find a way to recoup any draft capital by trading him after his horrible on-field performance.

More surprising to me was the team-low grade of 28.9 for Deshazor Everett, who played only 38 defensive snaps this season. Although I thought Jeremy Reaves outplayed Everett in the preseason, over the years I’ve formed an impression of Everett as a solid backup at safety. I also know that with a smaller sample size (fewer snaps), there is a greater chance that a few plays skew the results, either very high or very low.

Looking at the grades for the specialists, the only real surprise for me is that the grades for Hopkins and (especially) Way aren’t higher. It felt like Hopkins was one of the better kickers in the league, but PFF graded him 27th out of 42 kickers, while Tress Way was graded 7th among punters.