Review of the regional pairings–my first go ’round with projections.

A few things I didn't see coming with my projections: it was harder than I thought, I was wrong more often than I thought, and while the permutations of the rankings may be lengthy, they are still finite, which means there is room for some educated guessing in this lottery. Overall, I really like this process.

As a baseline test, PerfectGame.org, which specializes in all things college baseball, posted its final projections (below, left) before the selections were made this morning. I've highlighted in GREEN the picks that were spot on. Beyond selecting the host sites, I highlighted the top seed in each regional only if they either selected the correct seed for the host OR selected the pairing correctly. As the graphic illustrates, PerfectGame.org did a tremendous job of picking not only the top eight seeds in order but also many of the regional participants, something to brag about, really. Contrast that with my first go 'round (below, right) and you see some major differences.

[Graphics: PerfectGame.org 2014 projections and my 2014 projections vs. the actual field]

Oh well.  No beginner’s luck here but lessons learned.  Moving on.

The next point I'd like to raise about my selections versus the actual selections is mileage. While the averages for miles traveled to host cities did not vary much between the actual pairings and my projections (actual: 572 miles traveled to regional sites vs. projection: 583 miles), the number of teams that had to travel over 1,500 miles did see an increase in the actual pairings. Which, of course, I consider significant. Most of the time the average miles traveled will settle back toward the middle because in-state teams are more prevalent than cross-country travelers, so you won't see a large variance, as is the case here. But the more teams you can spare from traveling a great distance (here, considered to be anything over 1,500 miles), the better job you have done with your pairings. In my projections, I had only four teams traveling over 1,500 miles (Arizona State, Binghamton, UC Irvine and Stanford). The NCAA committee, however, had five teams traveling over 1,500 miles (Long Beach State, Oregon, Stanford, San Diego State and Washington), and two of those teams (Oregon and Long Beach State) are actually traveling more than 2,000 miles. It occurs to me this could have been avoided. But, as mentioned before, I'm simply a novice with next to zero understanding of what the committee does behind closed doors. I'm still figuring out how to do it without any doors at all.
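For anyone who wants to run this kind of mileage check themselves, here is a minimal sketch of the math I'm describing: an average of miles traveled plus a count of the teams pushed past the 1,500-mile line. The team names and distances in the example are placeholders for illustration only, not the actual mileage figures from either bracket.

```python
# Sketch of the mileage comparison described above.
# Distances would come from whatever mileage calculator you trust between
# campus and the regional host city; the values below are hypothetical.

def travel_summary(miles_by_team, threshold=1500):
    """Return (average miles traveled, teams traveling farther than the threshold)."""
    average = sum(miles_by_team.values()) / len(miles_by_team)
    long_trips = [team for team, miles in miles_by_team.items() if miles > threshold]
    return average, long_trips

# Hypothetical example: a handful of teams with placeholder distances.
projected = {
    "Arizona State": 1600,
    "Binghamton": 1700,
    "UC Irvine": 1550,
    "Stanford": 1800,
    "Team close to home": 150,
}
average, long_trips = travel_summary(projected)
print(f"Average miles traveled: {average:.0f}")
print(f"Teams over 1,500 miles: {long_trips}")
```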

One final note I'll make about the pairings announced today: along with distance considerations, it occurs to me that the most important thing when making these pairings is to reward good seasons (teams) better than you reward relatively lesser seasons (teams). That should be evident when you look at the overall difficulty of the pairings, which should flow evenly from your top regional (your number 1 seed, in this case Oregon State) up toward your lowest-ranked regional (your effective #16 seed, in this case the host paired with the top regional on the Super Regional side, Oklahoma State). Taking into consideration ONLY the RPIs involved in each regional, it appears that descending-order parity was not the topic of the day for the NCAA selection committee. It actually works out, most questionably, that the lowest regional host (again, Oklahoma State) ends up with the easiest regional, as shown in the Actual Regional Difficulty graphic below. Then, even more surprisingly, the next-best regional host after OSU is handed the hardest overall regional, and the top seed and top regional host receives only the third-easiest pairings. Huh? My rankings, though not perfectly linear, felt more closely related to what one would want out of a reward system. At any rate, if we can surmise that the committee didn't do its best when it came to distance, as noted above, and ALSO didn't do its best when it came to rewarding better seasons with lesser competition, then, really, what was its basis?

Prediction vs. Actual Difficulty
[Difficulty levels are based on the average of the 2 and 3 seeds' RPIs, as 4 seeds tend to add unfair variance. The seed of the host itself was also not factored in, as that may be an unfairly swaying factor: a higher seed would simply be improving its own difficulty rating by virtue of "being themselves."]
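To make that difficulty metric concrete, here is a minimal sketch of how the number could be computed for each regional and how you could check whether difficulty actually eases as the host seed improves. The regional data in the example is hypothetical, strictly for illustration; a lower RPI rank means a better team, so a lower 2/3-seed average means a tougher field.

```python
# Sketch of the regional-difficulty metric described above:
# difficulty = average RPI rank of the 2 and 3 seeds (host and 4 seed excluded).
# A lower average means a tougher regional. RPI ranks below are placeholders.

def regional_difficulty(rpi_by_seed):
    """Average RPI rank of the 2 and 3 seeds in one regional."""
    return (rpi_by_seed[2] + rpi_by_seed[3]) / 2

def rewards_better_hosts(regionals):
    """True if the better a host's seed, the easier its regional.

    `regionals` maps host seed (1 = best host) to that regional's {seed: RPI rank} dict.
    Easier means a numerically higher 2/3-seed RPI average, so the averages should be
    non-increasing as the host seed number climbs from 1 toward 16.
    """
    averages = [regional_difficulty(regionals[seed]) for seed in sorted(regionals)]
    return all(averages[i] >= averages[i + 1] for i in range(len(averages) - 1))

# Hypothetical usage with two regionals (placeholder RPI ranks):
example = {
    1: {2: 60, 3: 95},  # top host draws weaker 2 and 3 seeds
    2: {2: 45, 3: 80},
}
print(regional_difficulty(example[1]))  # 77.5
print(rewards_better_hosts(example))    # True
```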
P.S. I don't know enough about West Virginia's or Mercer's baseball seasons compared to Cal State Fullerton's and Clemson's, other than the latter two schools having richer baseball traditions than the former two (28 CWS appearances and 4 NCAA titles between them), but looking solely at RPIs, WVU (RPI of 38 compared to 49 for Clemson) and Mercer (46 compared to 54 for Fullerton) sure did get held back for some reason. Yes, the Mountaineers have a terrible top 50 record, winning less than 25% of their games against the best teams in the country, but I believed those results were already considered when compiling an RPI. Right? Mercer won every single game they played against a top 50 team (they only played three games versus top 50 opponents), but that didn't help their cause. So what happened? Oh well. No time to deliberate any longer on that. Now? On to the Baton Rouge Regional teams... coming soon!
