With IGN firing one of their writers after an investigation into their Dead Cells review confirmed plagiarism, the gaming community once again felt betrayed by professional game critics. With every review, game critics seem to be taken less seriously by their readers. In this opinion piece, I will explain what I consider to be the main flaws of video game criticism today, why they exist, and what my solutions would be.

Image Credit: IGN.

Introduction

Game critics: people love to hate them. Not without reason, as many game critics make errors in judgment when trying to entertain readers while voicing their opinions at the same time.

I am not trying to defend game critics in this article. On the contrary, I'll go in depth into the problems many review readers encounter, why they occur, and what a better alternative would be for the many flaws game critics display in their reviews.

Before I can get to the flaws of game critics, however, it is important to look at what a game critic's role is and how present-day game critics differ from one another.

A game critic is essentially a writer specialized in video gaming and in writing reviews (basically a recap of a game combined with a personal opinion and a score). The reviews are in essence as simple as can be: the writer experiences a game first-hand and then writes about that experience, giving the game a certain score. These scores are in most cases published before the release of a game, meaning game producers can use them for promotion if they are positive. There are also websites that aggregate the scores of major video game magazines (e.g. Metacritic), which makes the general critic consensus on a game known worldwide, increasing or decreasing the hype around an upcoming title.

A complication can be found in the way magazines give scores. Metacritic uses a point system, with 100 (a 10.0) being the maximum score and 0 (0.0) the minimum. In this system, even critics that use decimals can get their precise score mixed into the average critic score. However, some critics don't use a score out of 10 or 100 at all: some rate out of 5, and others give no numerical score, offering a type of recommendation instead.

The problem with these differences in scoring is that there is no real connection between the scores. Is 2.5/5 the same as 5/10? Does 'highly recommended' mean a game deserves an 8/10 or higher? This can confuse even a critic, since it's hard to tell which number means 'good' or 'average'. But it's even more complicated than that.
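To make the comparison concrete, here is a minimal sketch of how the different scoring formats could be normalized to a single 0-100 scale. The verbal-verdict mapping is purely my own assumption for illustration; Metacritic's actual conversion rules are not something I can vouch for.

```python
# Hypothetical normalization of different review formats to a 0-100 scale.
# The verbal-verdict numbers below are an assumption for illustration only.

def normalize(score, scale):
    """Convert a score given on a 5-, 10-, or 100-point scale to 0-100."""
    return score / float(scale) * 100

# A rough guess at mapping verbal verdicts onto numbers (pure assumption):
VERDICTS = {"not recommended": 25, "recommended": 65, "highly recommended": 85}

print(normalize(2.5, 5))   # → 50.0, the same point as...
print(normalize(5, 10))    # → 50.0
print(round(normalize(9.7, 10), 1))        # → 97.0
print(VERDICTS["highly recommended"])      # → 85
```

Even with such a conversion in place, a 50/100 that started life as 2.5/5 and a 50/100 from a 100-point critic don't necessarily mean the same thing to their respective authors, which is exactly the problem.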

Image Credit: GameXplain.

The Flaws of Video Game Journalism Companies

There are different approaches writers use to come up with a score when writing their review. This confuses the reader but isn't necessarily a bad thing. If every company simply stuck to its own, original way of doing things, there would be no Metacritic. Every reader could just pick a couple of review sites to put their trust in and witness something unique from every single video game magazine. But the market demands a general consensus; it is the only professional way to distinguish games quality-wise. This should also benefit loyal followers of game scores: they are informed directly and can make an objective choice of which game to play.

But the problem lies in this collection of reviews. Being the 30th critic that rates a game 80/100 is not as spectacular as rating it 50/100 and getting more traffic on an individual website. Or, if a game is indeed rated 80/100, why not appreciate it even more to appeal to the game’s loyal fanbase and give it a 95/100?

The incentive to create a controversial review is apparent but has its risks. Will the writer risk getting eviscerated in the comment section, for example?

This brings me directly to the plagiarized Dead Cells review by IGN. The score given was a 9.7/10, which means the game is just shy of an absolute masterpiece. The writer appears to have copied the entire content of their review from a lesser-known YouTube channel. The only reasons I can see for taking such a risk are laziness or a shortage of time. Either way, this would suggest that giving higher scores takes less effort (since it requires less actual criticism), and since higher scores are appreciated by more fans of a game, the combination can be the ideal way to produce low-effort, high-appreciation reviews. That being said, there are instances where deconstructing the hype around a game with a lower score than anticipated can work positively as well, with readers wondering why the game did not perform as expected. This also works the other way around: when a game has no high expectations and performs well, readers will be curious as to why.

Such a method of producing reviews could have short-term success, but with the growing complaints against video game critics, maybe a more honest and critical position would be preferable at the moment. It almost feels as if we’re at the point where every game critic is anticipating the best game of all time, and can’t stop putting out high scores.

Image Credit: Easy Allies.

Another issue that directly involves both reviews and scores is the management of writers and editors. There is a clear contrast between an objective and a subjective score. Since a review always has a single writer, a single score could land very far from the general consensus, making the review controversial. It's possible that companies manage review scores to fit the pre-existing hype surrounding a game; however, this is pure conjecture, as I have no evidence to support the theory. Furthermore, on a website with multiple writers it is hard to tell which writer will review a game. This means the reader is always at a disadvantage, not knowing whose point of view a review is coming from. Although game journalism outlets publish the author's name, I find it hard to trust a particular writer because of the lack of spotlight and depth when an outlet has lots of writers covering all kinds of games.

The Allure of Review Copies

Another issue is the existence of review codes. Obviously, every critic loves the opportunity to play a game before others get to, and at no cost. Major game journalism companies rely on these early review codes. This means, however, that criticizing a game too harshly may break the bond between game developer and game magazine. If an outlet can't get a review code, the review won't be done until after the game's initial release, which means the magazine's journalists have no opportunity to influence the critic score before launch. To avoid that risk in the first place, there may be limits on how harshly a game can be reviewed. It helps if multiple other critics share the negative opinion of certain games, but in most cases it's still a matter of better safe than sorry.

However, what if a journalism outlet took the risk of reviewing a game honestly and lost its privilege to receive review copies from a certain developer? If the outlet published an article about it and the story blew up, the developer could expect even more negative publicity. So I sincerely doubt game developers would actually do this. For now, the only critical people being denied review copies are YouTubers, who are mostly on their own.

To resolve the issue of cautiously written reviews, I believe it best to universally verify which gaming journalism outlets can get review copies for games. To make this verification neutral and impactful, why not let Metacritic determine the verified outlets, for example? This verification should be indefinite, with the possibility for newer outlets to join the list, just like the way Metacritic works as a website. Only in the case of a huge scandal or a consistent pattern of poor reviews could an outlet be banned from it. This should give review magazines the confidence to write genuinely honest reviews with fitting scores for all sorts of games.

Image Credit: videogamedunkey.

What Reviews Should Be Like

The Value of a Video Game Review

I don’t consider myself in any position to tell others how video game reviews should be written, but I have formed an opinion on the matter by simply thinking about what people should hope to see when they are looking up a review score. I believe that there are several reasons to look for review scores:

1. Looking for a new game to play: what’s recommended?

2. In anticipation of a new game: is it as good as I hope/expect it to be?

3. In retrospect: wow, I really enjoyed this game; what does it score?

Because review codes are handed out before the release of a game, with the first reviews going online right before the worldwide release, the essence of a video game review is its recommendation. Sure, it is great to compare the results with the expectations and to place the game on a timeline, but primarily, these reviews are scores that tell the reader exactly how good a game is. But there are different opinions on what can be considered a good video game.

Image Credit: Team Salvato.

The Basics of a Video Game Review

Most review scores consist of comparing the positive aspects to the negative aspects. If a game has more positive than negative aspects, it must be a good game. Now, obviously, there is so much wrong with this.

The main issue for game critics in their reviews is separating subjective opinions from objective observations. Most critics make their reviews a weak sum of positive and negative experiences, with a score at the end that summarizes these points in an accurate way. For many writers, this is the go-to method to rate a game. It's easy to summarize, since the positives and negatives reflect the score; no aspect needs to bear extra weight. What this means is that a game with no particularly bad aspects could get a 10/10. But what if the game lacks ingenuity? Is a game that is well built and fun to play for tons of hours an experience worthy of a perfect score? Obviously not. To separate the average from the good and beyond, originality and unique characteristics are key to video game magic.

There are also different opinions on what it is that makes a video game a video game. Is a game with only dialogue choices worthy of being called a video game? Are there standards that every video game should have as a bare minimum? Questions like these should get a critic thinking while approaching a new experience. In my mind, that’s what it’s all about. A video game should be considered an interactive video experience in which the player can influence the course of events, no matter how minimal. This is what separates a video game from a movie or a board game.

Image Credit: Ubisoft.

Once a critic understands what makes a video game a video game, they have the authority to review one. The quality of a review is therefore not determined by the critic's taste or experience, but by their ability to be both subjective and objective in a review and to back up their statements with relevant arguments.

While the ability to strengthen your opinion on a game with solid arguments is the key to any opinion piece, especially reviews, the scoring is what shapes a review the most. This is because:

1. The score is the shortest and most noteworthy summary of a critic’s opinion on something.

2. The score puts the game in perspective; how good is this game compared to that game?

3. How positive or negative is a critic on this game compared to other critics?

Sadly, most reviews serve as a build-up to the score at the end. When I look at a new game's Metacritic score, I also look at where the scores came from. An interesting-looking review might be worth a read, but reading an entire review for every major release just doesn't appeal to a big audience. The score is easy to understand and memorable.

And that's why it is important to point out just how inaccurate most video game scores are. Obviously, differences in approach cause perspective problems when comparing reviews; a 'liked-a-lot' is hard to compare to an 80/100, after all. Above all, it is essential that the scores reviewers give match the value readers attach to those scores. And that's why the current scores don't make sense to readers.

The Price of Enjoyment

Before a critic can say what a score stands for, they should carefully consider what should actually impact a score. The price of a game at release is something I rarely see critics use when determining their score. Apart from being a cost, the price tells potential buyers how big a game the developer is making, with $60-$70 signalling a AAA title: a game that has been in development for years with a big budget. A $15 game, on the other hand, is rarely a major release from a major company. However, indie projects such as Hollow Knight and Stardew Valley do live up to what could have been a major title. In that case, a critic should judge a game by what it accomplishes relative to what it set out to be. For example, Stardew Valley is a project by a single person. The game is absolutely loved by many gamers, is full of original gimmicks and has a unique feel to it. In that light, the game exceeds its own expectations and deserves a higher score than it would have gotten if the price were not included in the review. On the other hand, there are annual franchises with minor upgrades and recurring bugs. Surely, they still have loads of gameplay components, but they simply don't fit the full price tag.

Image Credit: Polygon.

Another way to use the price in a review is by counting the dollars per hour. For example, as of now, I have played 150 hours of The Legend of Zelda: Breath of the Wild. The game cost $70 when I bought it. This means I have paid less than $0.50 for every hour played, which makes it great value. The problem with this approach lies in less gameplay-focused games. For example, God of War (also an excellent game) only took me 60 hours to 100% at the same price. Most adventure games that rely heavily on dialogue clock far fewer hours. This makes the method somewhat unreliable, since games have different focuses; game time is not everything. God of War, for example, offers higher quality over its shorter run than most longer games do.
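The arithmetic above can be sketched as follows, using the hours and prices from my own playthroughs mentioned in the text:

```python
# Cost-per-hour comparison for the two games discussed above.

def dollars_per_hour(price, hours):
    """Price paid divided by hours played."""
    return price / hours

botw = dollars_per_hour(70, 150)  # Breath of the Wild: 150 hours at $70
gow = dollars_per_hour(70, 60)    # God of War: 60 hours (100%) at $70

print(f"BotW: ${botw:.2f}/hour")  # → BotW: $0.47/hour
print(f"GoW:  ${gow:.2f}/hour")   # → GoW:  $1.17/hour
```

As the text argues, the lower number doesn't automatically make Breath of the Wild the better purchase; the metric ignores the quality packed into each hour.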

If a critic were to review with an eye on the price tag, there would still be a difference between games: an 8/10 game at $15 should be less enjoyable in absolute terms than an 8/10 game at $60, since otherwise the $15 game would deserve to surpass the $60 game's score. This also matters when looking at the greatest games of all time: price plays no role in determining those, so it doesn't impact a game's afterlife.

Original Quality over Repeated Quantity

Once the price tag is taken into consideration, the critic's main objective should be rating their experience. I see reviewers talk about how much fun they had, and while I believe enjoyment is a positive quality for a game to have, not all games have to be fun to be good. If the gaming community considers video gaming an art, then it should treat games as things that can or cannot be art. Art involves all kinds of emotions, not just sheer joy or realism. So when rating a gaming experience, priority number one should be to look at what a game did to you that other games did not. Take a game like Breath of the Wild, where the minimal sound effects, the beautiful art style, incredibly accurate physics and dynamic environments suck you into the game like nothing before. I felt very alive playing such a masterful game; it truly was art to me. Then there are games like Hollow Knight, which sucked me into a dark world full of struggle and decay. Once again, I was blown away by how original a game could be. That is how you separate art from games that simply aim for the fun factor and are thus less ambitious in general. Annual franchises and most free-to-play games rely on fans coming back, so they give them what they already know with slight upgrades. Not innovation by any stretch, yet still positively received by critics because of their already proven qualities.

Image Credit: Call of Duty.

This directly corresponds to the rating a video game gets. A rating should primarily be based on the experience a critic has with the game, and how it stands out compared to other similar titles. Obviously, the game’s mechanics and technical properties are to be taken into consideration as well, since even the greatest game cannot be experienced if technical issues hinder it from being played, but this is only to be expected. A malfunctioning game is an unfinished product and can’t be called a complete game in the first place.

Subjective Versus Objective Reviews

I hope this explains my main critique regarding ratings. I’ll follow up with some examples to support my argument.

I see most critics mindlessly giving a game an 8+/10 score because it is fun and its mechanics work, even while multiple games with similar mechanics, story writing, and technical achievements have been released in the same year. This confuses the mainstream gamer and makes them believe a game is good simply because it functions and is up to today's standards. For example, take the FIFA franchise. For football fans, FIFA is a great way to kill time with friends, and it has competitive modes to further develop the player's interest. However, every new FIFA game is a slight upgrade on the last. EA knows that the interest in FIFA remains, so bringing out a new low-effort version every year is very profitable and a logical option. If you look at the review scores for recent FIFA titles, you will see no bad overall scores. This, while every single version is largely redundant for players who are not diehard fans of the franchise. It also shows that originality isn't exactly an important quality for a game to have, as FIFA's overall Metacritic scores are similar to those of original indie games, even when those indie games have only a few flaws.

Image Credit: GameSpot.

FIFA is a perfect example of critics reviewing the size of a game rather than the unique experience it brings. In this case, critics review games too objectively, rating only the content a game has and not the feeling they get from it. On the other end, there is being too subjective. I have read reviews of games from popular franchises that are clearly written by fans of the franchise. Normally, this should not be a problem, but if the reasoning behind a 9.5/10 is 'it's ideal for fans of the franchise', then there is a clear lack of objectivity in the review.

A subjective review does not require a lack of objectivity to be controversial. When YouTuber Jim Sterling gave Hellblade a 1/10, he did so because a game-breaking glitch ruined his personal experience. In his mind the score was justified because he couldn't play the game, even though it worked fine for everyone else. This resulted in him being hated for his overly negative review. He also reviewed Breath of the Wild with a score of 7/10, whereas the general consensus sat between 9.5 and 10/10. Once again, he received hateful comments for being honest with himself in his review. A 7/10 is by no means a bad score objectively speaking, yet he still got bashed for his opinion. This just goes to show how vulnerable an honest critic is to the general consensus.

Image Credit: Nintendo.

That being said, I can understand the difficulty of balancing subjective experience against objective, universal content reviewing. First of all, objective observations are usually shared among reviewers: when a critic encounters a straightforward bug in a game, other critics will likely encounter the same bug and report it similarly. Second of all, games with comparable scores might not be remembered as equally good as those scores suggest after a certain time; some games are just more successful at leaving their mark than others. Third of all, no reviewer is perfect, nor is any review. One of the reasons I am writing this article is, in fact, doubt about my own scoring. I gave Dragon Quest Builders an 8/10 because when I played it, I thought the game was polished and enjoyable and gave me the sense of a sandbox while still being an RPG. Looking back, I have been struggling to pick the game up again, since it just doesn't stand out that much. Still, within the current way of rating games, I didn't make an error in judgment by giving it an 8/10. However, would I recommend a $45 game that does not stand out at all? Of course not.

Fair Scoring

When rating a game, a critic should also keep game history in mind, while still judging the game as a new product. Today's standards are different from those of the seventh console generation, so when a game has great graphics on a PS4 Pro, that shouldn't be considered a huge plus just because it is a AAA game living up to today's standards. Only when it pushes the edge of graphical capabilities (like Uncharted 4) can it impact the critic's score drastically.

A useful tool for reviewing a product partly on its technical prowess can be to put the game in perspective. Is a game as inventive in RPG systems as Final Fantasy was when it first released? Is a game that utilizes cinematic moments as polished as The Last Of Us in 2013? Questions like these can be helpful to determine just how much a game pushes the edge in its respective genre.

Image Credit: Playstation.

When major critics review games, the score is rarely lower than a 7.5. Even a franchise as repetitive as Call of Duty will always get away with an 8/10. The funny thing is, when a AAA game averages 80/100 on Metacritic, I know it's not worth the purchase, whereas an indie game with a 70/100 can be awesome for the lower price. The reason is simple: indie games have more exploitable weaknesses, since they are made on a smaller budget. Furthermore, rating an indie game harshly is not as risky as rating an annual franchise critically. But for the reader, this can only cause confusion. When a gamer can get four original indie experiences with some apparent flaws for the price of a single AAA game without any new ambitions, the critic consensus will most likely tell the gamer to go for the AAA game, while every single one of those indie experiences will be far more worthwhile and memorable.

There should be a new consensus regarding scores, where 5/10 means 'average' and 7/10 means 'good'. Right now, a 7 is practically the lowest score a AAA game can get: annual franchises receive 8s, games that fail to live up to the hype score 7s, and only scandalous games like Star Wars: Battlefront II end up with a 6.

Only when critics can rate video games on their value as original art, and are free to give scores across the full 0-10 range, can the mainstream gamer be well informed about new titles and pick out only the most essential games to play. It would work out just as well for game journalism outlets, since they would finally be taken seriously again.

Image Credit: EA Star Wars.

A Cure for Today’s Problem

Currently, the status quo looks far from being resolved. This is mainly because a single critic cannot directly push their competitors toward a more critical way of reviewing. The risk is also far too great: changing in favor of fair reviews might win the hearts of some readers, but it won't convince game developers, who won't risk worse scores.

I advise readers to always read reviews critically, and to actually read the review instead of just the score. Comparing multiple sources (at least three) is also recommended, since the objective criticism can be cross-checked and the subjective criticism balanced.
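As a rough sketch of that advice: given at least three scores normalized to 0-100, the spread between them hints at how much subjective disagreement there is. The 15-point threshold below is an arbitrary choice of mine, not an established rule:

```python
# Combine scores from several outlets (already on a 0-100 scale) and
# flag games where the critics strongly disagree with one another.

def summarize(scores, spread_threshold=15):
    """Return (average score, whether the verdict looks contested)."""
    if len(scores) < 3:
        raise ValueError("compare at least 3 sources")
    avg = sum(scores) / len(scores)
    spread = max(scores) - min(scores)
    return avg, spread > spread_threshold  # arbitrary cutoff

avg, contested = summarize([97, 70, 80])
print(f"average {avg:.1f}, contested: {contested}")  # → average 82.3, contested: True
```

A contested verdict is exactly the case where reading the full reviews, rather than the bare average, pays off.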

For a major change, we would have to hope for a stronger incentive to publish honest reviews and a more protected status for more popular game journalism outlets, so that they can review a game without having to hold back.

Conclusion

I have only scratched the surface of the issues people encounter when looking up reviews of games. There are tons of additional flaws worth analyzing, but they might get in the way of the point I am trying to make in this article.

To sum it up (TL;DR):

1. Critics should be aware of the fact that honest reviews with accurate criticism appeal to more people and make a journalism outlet more authentic on a topic.

2. Critics should be free of review-copy stress; they should be able to be as harsh as they need to be on AAA titles.

3. Reviews are primarily an indicator of how recommended a game is.

4. The price tag of a game is a useful tool to compare the ambition and confidence of a game’s developer with the execution of its ideas.

5. Video games are to be rated as if they can be art; that’s how gamers can separate mindless fun games from true works of art.

6. Video games are to be rated both subjectively and objectively: subjectively, because a video game is an experience lived first-hand by the reviewer; objectively, because a video game should always be put in perspective when reviewed.

7. When determining how good a game really is, putting it in its time’s perspective is essential.

8. Since Metacritic uses scores from 0-100, critics should utilize all 101 numbers if necessary.

9. In the current state of video game reviews, it’s recommended to use multiple sources and their contexts when determining how good a game actually is.