ZIPS .51

JAMES .50

GREY .48

FANGRAPHS .47

ROTOCHAMPS .43

MARCEL .36

Averages for ALL COMMON PROJECTED PLAYERS are provided as well.

The spreadsheet is available via Google at:

https://docs.google.com/spreadsheet/ccc?key=0AqBBhSBkTzrUdDZYV1IwUFB0aVM4Rk0ybTlSRDkzVlE#gid=0

Mitch, you hit it out of the park! Thank you!!!

“Accuracy (my attempt): on average, the darts hit the bullseye. Maybe none of the darts land on the bullseye, but on average they do.

“Precision: when you want to throw higher, the dart generally goes higher.

For forecasts, precision is the key.”

After a little give and take between us, he moved to the following in place of his correlation coefficients above. This was done primarily to achieve an overall, reliable value for each pert.

Mitch calculated r-squared values and averaged them into a total rating for the precision of each pert. He also added his work at the bottom of the summary page of my original spreadsheet. That is the place to look for the measure of the precision of the perts’ projections.
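Mitch’s rating can be sketched like this: an r-squared per stat category, averaged into one number per pert. The stat names and sample values below are invented for illustration; only the averaging approach comes from the comment above.

```python
import numpy as np

def r_squared(projected, actual):
    """Square of the Pearson correlation coefficient."""
    r = np.corrcoef(projected, actual)[0, 1]
    return r * r

# Toy data: one pert's projections vs. actuals for two categories.
stats = {
    "HR":  ([25, 30, 12, 40], [22, 33, 10, 38]),
    "RBI": ([80, 95, 50, 110], [75, 100, 45, 105]),
}

per_stat = {name: r_squared(np.array(p), np.array(a))
            for name, (p, a) in stats.items()}
overall = sum(per_stat.values()) / len(per_stat)  # one rating per pert
print(round(overall, 2))
```

Averaging the per-category r-squared values is what collapses six columns of correlations into the single “Avg of R-sq” number reported per pert.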

Avg of R-sq (over 525 AB):

ZIPS

Correlation Coefficients for 177 players

            R     HR    RBI   SB    AVG   pts
Fangraphs   0.48  0.67  0.56  0.82  0.50  0.58
Rotochamps  0.48  0.64  0.56  0.82  0.51  0.57
James       0.54  0.67  0.57  0.79  0.45  0.58
Marcel      0.52  0.64  0.55  0.79  0.46  0.55
ZIPS        0.45  0.65  0.52  0.81  0.54  0.54
Grey        0.52  0.63  0.54  0.78  0.45  0.56

Correlation Coefficients (525 AB) for 69 players

            R     HR    RBI   SB    AVG   pts
Fangraphs   0.47  0.79  0.66  0.87  0.53  0.66
Rotochamps  0.34  0.76  0.67  0.86  0.52  0.60
James       0.63  0.79  0.69  0.86  0.52  0.66
Marcel      0.31  0.71  0.58  0.84  0.43  0.43
ZIPS        0.56  0.80  0.70  0.88  0.60  0.65
Grey        0.51  0.79  0.73  0.85  0.49  0.67
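The 525-AB cut behind the second table can be sketched as follows. The player rows are invented, and filtering on *actual* at-bats is my reading of the “healthy players” criterion, not a confirmed detail of the spreadsheet:

```python
import numpy as np

# Keep only players whose actual AB reached 525, then correlate
# projected vs. actual for one stat. All numbers below are invented.
players = [
    # (actual_AB, projected_HR, actual_HR)
    (612, 30, 28),
    (540, 22, 25),
    (310, 15,  4),   # part-time season: dropped by the filter
    (598, 35, 39),
]

healthy = [(p, a) for ab, p, a in players if ab >= 525]
proj, act = map(np.array, zip(*healthy))
r = np.corrcoef(proj, act)[0, 1]
print(round(r, 2))
```

Dropping the part-time seasons is why the 69-player correlations run noticeably higher than the 177-player ones: injury-shortened years add noise that no projection system can anticipate.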

The idea is that we all nominate two players to start. You have $260 to spend total, and each time a player’s price goes up, their clock gets six hours added back on. I’ll probably start it on Monday. If we don’t have enough people, then bots will fill in the gaps.

Here’s the link: http://www.couchmanagers.com/auctions/?auction_id=1

@sean: You nailed it!

@tggq21: we’ll see how the available time presents itself. Thank you!

Just wanted to thank you and Grey for War Room once again. Very accurate and led to a championship! Hope you can continue with it!

Grey has an exceptional talent for evaluating a player’s worth compared to the general market perception, which in the game that we play can be even more valuable than projecting year-end stats. The difference between an average projection on a player and an elite one usually isn’t as valuable as being able to navigate through draft day and the player pool with a level of sophistication.

Tango Tiger runs a forecasting challenge every year. If you search “forecasting” on the blog linked, you’ll get a flood of information that might be interesting to you.

I will say I would not be surprised if Grey did in fact beat all these forecasts. If you look at the data sets for the mechanized forecasts, you can always find some obvious outliers that for whatever reason expose flaws in the algorithms and produce some really funky projections. I remember PECOTA had Jake Fox hitting 30 home runs for the A’s a few years back, for example. A human is always able to include more factors in an evaluation than the limited inputs available to a forecasting system.

Another thing that throws your analysis off is that projection systems incorporate past injuries into their playing-time projections. Take Ian Kinsler’s 2011, for instance: Bill James projected him for 609 PAs, Marcel 494, ZiPS 568. His actual 2011 total was 723.

Grey on the other hand, can project a player out for the full season. Because injuries really cannot be predicted at all, Grey gets a serious advantage in getting to predict a full season worth of production for most players. Even if he tinkers with a few projections for “injury prone” players, the systems are reducing playing time for non-injury prone players who happened to have injuries in prior years.

I also believe that if you’re a smart fantasy baseballer, you’ll use this site and check many others. The other perts you listed are good sources; it’s better to be well researched than to just go with one source. Also, as long as you stay away from ESPN, you’ll do alright lol

If one used Marcel (who predicted lower overall, perhaps taking into account injuries?), he would be applying projections that fell 12% short of end-of-year actual production for ‘healthy’ players (those with 525+ AB).
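To make the 12% shortfall concrete: it means an overall projected-to-actual ratio of 0.88, so the actual totals implied by a Marcel projection are the projection divided by 0.88. The point total below is invented purely to show the arithmetic:

```python
# A 12% shortfall means projected / actual = 0.88 overall.
projected_total = 264.0   # hypothetical Marcel point projection
ratio = 0.88              # overall projected-to-actual ratio
implied_actual = projected_total / ratio
print(round(implied_actual, 1))
```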

There are lots of knowledgeable people on here as well. If you’re new, that is; if not, my bad….

The options for attempting a valid comparison are innumerable. My goal was to estimate which pert’s projections would be most beneficial for projecting end-of-year production for my draft. I, personally, would rather use projections as if players will be healthy, rather than deal with unpredictable injuries.

@DrEasy: I made an initial attempt last year using a strategy similar to the one you propose. I ranked each pert’s projection for each stat relative to the final stat. I only got as far as 1B (it was a lot of work). The perts were different, but Grey ranked 2nd (I believe; I didn’t save those stats). I will try to repeat it annually. Again, anyone is welcome to the spreadsheet. One sheet shows an example of comparing stat by stat.

@Big Mike: Sorry, Mike. I am a streamer. I find the pitching too unpredictable and not on my agenda.

@Jeff: My initial cut was at 500 AB. Much more reasonable results than the overall. Then I moved up to 525 and felt that level was even more solid. Position is identified in the spreadsheet, and I sorted by it. One or two extremely high/low player performances can skew those maybe 5%. For example, 2nd base had an overall ratio of .92, with Grey at .95. The first thought would be for perts to up their projections. However, looking at the data, Kinsler had ratios from .68 to .84 (all under-projected him). So, again, the overall ratio across all positions helps to reduce the skewing. I did calc median and mean in the spreadsheet.

@Tony: I made the formula up myself. I played a points league for years, and the formula comes close. I made a few adjustments to project closer to real outcomes. For example, HRs usually get 4 pts; 3.5 is closer to RCL outcomes. The 290 factor for AVG is my best attempt to weight it at a level relative to the other stats.
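A possible reading of that formula, for anyone who wants to reproduce the “pts” column: only the HR weight (3.5) and the AVG factor (290) are stated above, so treating R, RBI, and SB as 1 point each is my assumption, not Fred’s published formula.

```python
# Hedged reconstruction of the points formula. HR = 3.5 and the 290
# AVG factor come from the comment; the 1-point weights on R, RBI,
# and SB are assumptions for illustration.
def points(r, hr, rbi, sb, avg):
    return r + 3.5 * hr + rbi + sb + 290 * avg

# Example: a 90 R / 30 HR / 100 RBI / 10 SB / .280 hitter.
print(round(points(90, 30, 100, 10, 0.280), 1))  # → 386.2
```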

@Red Sox Talk: I have provided the raw data. I don’t claim to be the end-all evaluator. Encourage you to analyze as you see fit. I had done the work for my own projections. Thought others might be interested in the results.

@Matt L:

Personally, I am kinda tapped out regarding the time I am willing to give to further analysis. The results have me at a spot where I am happy to just run with Grey’s projections. I have annually spent a lot of time crunching projections. I don’t think I can get better than an overall ratio of 1.00 to end-of-year production.

BTW: I will account for ‘injuries’ in some measure when applying this to newcomers. The average ABs of the top 12 at each position last year were:

C-474

1B-571

2B-586

3B-500

OF-535 (top60)

SS-581

I would guess the 500 for 3B reflects more injuries at that position. So I will cap a newcomer SS projection at 581 AB; a 3B newcomer gets the 500 AB.
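That playing-time rule for newcomers can be sketched as a simple positional cap. The cap values come straight from the list above; the function name and the cap-versus-keep behavior for projections already below the cap are my assumptions:

```python
# Cap a newcomer's AB projection at last year's top-12 positional
# average (top 60 for OF). Values are from the comment above.
AB_CAP = {"C": 474, "1B": 571, "2B": 586, "3B": 500, "OF": 535, "SS": 581}

def project_ab(position, raw_projection):
    """Limit a newcomer's AB projection to the positional cap."""
    return min(raw_projection, AB_CAP[position])

print(project_ab("SS", 650))  # → 581, capped at the SS average
print(project_ab("3B", 450))  # → 450, already below the cap
```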

@Fred: Nice work! Did you make that formula up yourself or borrow it from somewhere? Maybe I missed that part. Interesting article. I don’t know how conclusive it is, though. It looks like all the “perts” were pretty good in their calls?
