ANNOUNCEMENT – OrionRank Top 100 2017

Following up on last year’s ranking, a complete Top 100 sequel has been designed and finalized.

Release Schedule

Monday, January 15th – Friday, January 19th: #100-51, 10 per day.

Monday, January 22nd – Friday, January 26th: #50-1, 10 per day.

This is to avoid conflicts with the PGR’s release schedule. More info…

INFORMATION – PURPOSE & IDEA

The concept was to mix MIOM’s panel-based ranking and the PGR’s stat-based ranking by building a statistics-based Top 100 for Smash 4, an idea that many responded to positively. One cue taken from MIOM is very important:

ALL OF 2017 IS USED IN THE RANKING, NOT JUST A PGR SEASON’S WORTH (GENESIS SAGA – UMEBURA TAT)

*Also included: some very late 2016 tourneys in Japan from the last week of December that weren’t used in the 2016 Top 100, similar to how Umebura TAT is used for the PGRv5 rather than PGRv4.

So there will be discrepancies with the newest PGR. This is not intended to be a competitor or rival, which is why the publishing dates fall after the PGR rather than side-by-side; the PGR remains the accepted official ranking. OrionRank is an independent community project that can be used to argue in favor of players not recognized by the PGR because they narrowly missed the Top 50.

SHORT FORM METHODOLOGY

Last year’s methodology was loosely strung together and basic. Those who have followed the two mid-year rankings may have already seen the improvements, but in case you haven’t, here’s a basic rundown:

IMPROVEMENTS

Efforts have been made to reduce “point farming.” This issue resulted in numerous players – predominantly from SoCal – swamping sections of the rankings, for a few reasons. K9 and Tyrant were the primary beneficiaries because too many weeklies were used. This was rectified almost immediately in the following mid-year iterations.

Weeklies are now used extremely sparingly and are entirely disregarded at events where sandbagging is prominent. By comparison, many weeklies were used in the 2016 iteration, which was a massive mistake.

Player valuation has improved by separating placements between different tournament tiers, meaning outlier scores tank a player’s score less. This is still flawed: certain players who qualified but have low major attendance will post high placement averages and thus high point values, but these outliers are mostly confined to the Coastal Southeast, certain segments of Japan, and Europe.

Points earned from sets vary with the categorization of the tournament. A set win at a Category 1 event (e.g. a small regional/local) is worth significantly less than one at a Category 6 event (e.g. GENESIS, EVO, Civil War).

PROCESS

A player’s average placement score is taken, giving them a placement value. They are then placed into a graded section, which determines how many points they are worth when a player defeats them.

This is applied to the 350+ qualified players, who qualified by placing high enough at designated major-tier events.

Afterwards, set records are extensively combed through, and players earn points from set wins, with adjustments made for the skill pool of the tournament (i.e. its categorization) and for attendance, meaning players with low attendance get a boost to compensate.

Certain bonuses are awarded. Players get points for winning events, for example.
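The process above can be sketched in code. To be clear, this is only an illustrative sketch of the described pipeline: the grade thresholds, points-per-win values, and category multipliers below are made-up placeholders, and the player data is hypothetical – none of these are the actual OrionRank numbers.

```python
from statistics import mean

# Step 1: average placement -> graded section -> points-per-win value.
# Thresholds and point values here are hypothetical, not OrionRank's.
def grade(avg_placement):
    if avg_placement <= 4:
        return 100
    elif avg_placement <= 12:
        return 60
    elif avg_placement <= 32:
        return 30
    return 10

# Hypothetical qualified players: their placements, and their set wins
# recorded as (opponent, tournament_category).
players = {
    "PlayerA": {"placements": [1, 2, 5], "wins": [("PlayerB", 6), ("PlayerC", 4)]},
    "PlayerB": {"placements": [3, 7, 9], "wins": [("PlayerC", 6)]},
    "PlayerC": {"placements": [13, 17, 25], "wins": [("PlayerB", 1)]},
}

# Step 2: sets at higher-category events are worth more (placeholder weights).
CATEGORY_WEIGHT = {1: 0.25, 4: 0.75, 6: 1.0}

def score(name):
    total = 0.0
    for opponent, category in players[name]["wins"]:
        opp_value = grade(mean(players[opponent]["placements"]))
        total += opp_value * CATEGORY_WEIGHT.get(category, 0.5)
    # Low-attendance compensation and event-win bonuses would be added here.
    return total

for name in players:
    print(name, score(name))
```

Here PlayerA’s win over PlayerB (average placement ~6.3, worth 60 points) at a Category 6 event counts in full, while the win over PlayerC (worth 30) at a Category 4 event is scaled down, which is the core of the category-weighting idea.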

Full details on the prior methodology pages will be updated, but the model has thus far proven relatively reliable, as we’ve had a year to work out the kinks. Some work remains on placement-average outliers, but this model should be less error-ridden than 2016’s: I can’t think of any case where executive decisions to modify percentages of scores/points had to be made to account for inherent problems in the model.

As a result of using the entire year, some conventional placements accepted over the last 3-4 months may not necessarily apply. As a forward note and example, Ally is likely to drop a lot on the PGRv5 due to a struggling late 2017, but his solid first part of the year will be reflected on OrionRank. This won’t be a significant outlier, but it’s fair to say he’s 1-2 spots higher than you might expect given his recent outings.

QUALIFYING TOURNAMENTS

Category 6 – Supermajor (Top 96 Qualify)

GENESIS 4

2GGC: Civil War

EVO 2017

Category 5 – High Tier Major (Top 64 Qualify)

2GGC: Nairo Saga

CEO 2017

Super Smash Con 2017

2GGC: MKLeo Saga

Category 5- – Mid-Tier Major (Top 48 Qualify)

Shine 2017

GameTyrant Expo 2017

2GGC: Fire Emblem Saga

2GG Championship* (All players who attended already qualified, making this Category 5 primarily for scoring purposes.)

Category 4 – Low Tier Major (Top 32 Qualify)

2GGC: GENESIS Saga

2GGC: Midwest Mayhem Saga

Frostbite 2017

Frame Perfect Series 2

CEO Dreamland

Umebura Japan Major 2017

2GGC: Greninja Saga

Momocon 2017

2GGC: ARMS Saga

DreamHack Atlanta

2GGC: SCR Saga

2GGC: West Side Saga

Category 4- – Special Case (Top 16 Qualify)

NicoNico Tokaigi 2017

B.E.A.S.T. 7

Syndicate 2017

Umebura T.A.T.

#101 – REGI SHIKIMI

Notably, freezie has improved the card designs. This is one of the darker ones and doesn’t quite highlight all the color improvements, but the color scheme matches the characters used (hence a G&W card being darker). The card retains the space-based background and features less sporadically placed text boxes listing key tournaments throughout the year, key placements, notable personalized player records, and stats vs. the Top 20/100, as well as the basic score, region, and character information.

For this example, Regi Shikimi, the best Game & Watch player and one of Mexico’s titans, rolls in just outside the Top 100 thanks to a noteworthy performance at Nairo Saga, where he defeated Nicko, aperture, & Mistake in bracket before being eliminated by ScAtt at 13th.