SkyKing162's Baseblog

A fan of the Yankees, Red Sox, and large sample sizes.

First of all, I'd like to correct myself: I made a big error in my last post, at least theoretically. After determining the difference between "typical" league-best and league-worst pitching and fielding, and warning that those numbers didn't necessarily correspond to how credit should be divided, I went ahead and assumed that offense and defense deserve to split credit 50/50, and from that concluded how much difference there was between league-worst and league-best in all three phases of the game. Now, that 50/50 split is probably accurate for dividing up credit, but since that's not what I was doing, it's theoretically wrong.

What I should have done (and will do now) is find the difference between "typical" league-worst and league-best hitting, as measured by runs scored. Again, I'll take the range from the third-worst to the third-best runs scored totals. And yes, this is chock-full of park effects, unbalanced schedules, and you-don't-have-to-hit-against-your-own-pitchers issues. I'm fine with a rough estimate, though.
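As a sketch of the method, here's how the third-best-minus-third-worst range falls out of a list of team run totals. Only the 856 and 641 endpoints are the real totals discussed here; every other number in the list is made up purely to fill out the illustration:

```python
# "Typical" league-best vs. league-worst range: third-best minus
# third-worst runs scored. 856 (CHW) and 641 (PIT) are the real
# totals; the rest of the list is invented for illustration.
team_runs = [897, 873, 856, 810, 770, 731, 689, 673, 641, 632, 615]

ranked = sorted(team_runs, reverse=True)
third_best = ranked[2]    # 856
third_worst = ranked[-3]  # 641
print(third_best - third_worst)  # 215
```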

Last year's range: 856 (CHW) - 641 (PIT) = 215 runs. So here's our chart:

Offense: 1.3 runs/game
Pitching: 1.1 runs/game
Fielding: .4 runs/game
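The per-game numbers are just these ranges spread over a season. A quick check of the offense line, assuming a full 162-game schedule:

```python
# Offense range per game, from the run totals above.
# Assumes a 162-game season (a rough estimate, as noted).
runs_range = 856 - 641       # third-best minus third-worst runs scored
per_game = runs_range / 162  # 215 runs over a full season
print(round(per_game, 1))    # 1.3
```

By the same arithmetic, the pitching and fielding lines sum to 1.5 runs/game for defense as a whole.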

That's interesting - the two defense components combined give a range (1.5 runs/game) basically equal to the offense range - which suggests a pretty even split between offense and defense. But just as gravitational mass doesn't HAVE to equal inertial mass, even if offense is 50% of the game and defense is 50% of the game, these numbers didn't have to come out the way they did. And the fact that they did come out this way doesn't mean hitting is 50% of the game. (And heck, the error bars on these calculations are huge.)
