Spartias

Quality Control - Stalingrad (Now with EU data too)

28 comments in this topic

Recommended Posts

468
[OO7]
Members
1,403 posts
8,212 battles

I've spent a lot of time recently reading the continual threads arguing over whether the Stalingrad is overpowered or not. Heck, I've participated in a few of them. However, there's been a problem in all of these threads: no one has the data necessary to make fair comparisons of the ship, due to the nature of how it has been earned up to now. I'm not going to sit here and tell you that my data is flawless and that whatever my conclusion is, is perfect. It isn't.

 

However, I do have a history in quality control. At one time I ran the quality control departments for two production facilities simultaneously. It wasn't always the case, but most of the time I legally had to follow a strict 6% variance policy: as long as our products were within +/- 3% of our target values, we were fine. Well, there aren't any "target values" for ships in WoWs, so I figured I'd work off a 6% variance. I just took ye ole +/- 3% and decided to work with the whole spectrum of it.

 

Now, how would I attain fair data? Well, to be honest it's quite impossible to actually get flawless data with what we're given in the API, but I figured I could get close... at least within reason. So I made a spreadsheet... yeah yeah yeah... another one. Blame the company I did quality control for; that's where I learned it.

 

Here's how it works: 

1) Auto-import the top 100 players from wows-numbers for a given ship. In this case, there were only 66 players with the necessary 80 battles to qualify for the top list on wows-numbers, so 66 players it is. It turns out that those 66 players had (at the time of this study) played a combined 11,005 battles.

 

2) Look up another ship. By selecting a second ship, the sheet looks up every single one of the 66 players and finds all of their T10 ships. From that data I was able to extract their stats for the selected ship.

 

3) Purge data that doesn't correlate. Any player that didn't have at least 80 battles in the second ship was purged from the list. Their stats were purged from both the Stalingrad list and the secondary ship's list. This of course drops the sample sizes.

 

4) Weight the data. Stats brought in for both ships were weighted by the number of battles in order to create a single variable for each statistic. This way (for example) a single Stalingrad win rate variable may be compared to a single win rate variable from the secondary ship. A rough sketch of how the purge and weighting steps work is below.
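For anyone curious about the mechanics, here's a minimal Python sketch of steps 3 and 4. The player data and function names below are made up for illustration; the real thing lives in the spreadsheet and pulls from the Wargaming API / wows-numbers.

MIN_BATTLES = 80  # wows-numbers qualification threshold

# Stand-in for the imported data: {player: {ship: {stat: value}}}
players = {
    "player_a": {"Stalingrad": {"battles": 120, "win_rate": 62.0},
                 "Zao":        {"battles": 200, "win_rate": 58.0}},
    "player_b": {"Stalingrad": {"battles": 95,  "win_rate": 55.0},
                 "Zao":        {"battles": 40,  "win_rate": 60.0}},
}

def purge(players, ship_a, ship_b, min_battles=MIN_BATTLES):
    """Step 3: keep only players with at least min_battles in BOTH ships."""
    return {name: ships for name, ships in players.items()
            if ships.get(ship_a, {}).get("battles", 0) >= min_battles
            and ships.get(ship_b, {}).get("battles", 0) >= min_battles}

def weighted_stat(players, ship, stat):
    """Step 4: battle-weighted average of one stat across the whole sample."""
    total = sum(p[ship]["battles"] for p in players.values())
    return sum(p[ship][stat] * p[ship]["battles"] for p in players.values()) / total

qualified = purge(players, "Stalingrad", "Zao")            # player_b drops (only 40 Zao battles)
print(weighted_stat(qualified, "Stalingrad", "win_rate"))  # one comparable variable per ship
print(weighted_stat(qualified, "Zao", "win_rate"))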

 

Once the above was completed, I started looking through the data. Now, while the 6% variance seemed to work quite well for win rate, average frags (kills), and average damage, some of the older ships had an insane number of average battles. Due to this I was generous and increased the variance for average battles to 15%. Better to err on the side of caution. I have only taken screenshots of the compiled and processed data, leaving out the parts with the individual player names.
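And the tolerance test itself, as I'd express it in code. Treating the band as a maximum relative difference between the two weighted values is my reading of the +/- 3% idea; the 6% and 15% figures are the ones described above.

# Allowed variance per statistic, as described above
TOLERANCES = {"win_rate": 0.06, "avg_frags": 0.06,
              "avg_damage": 0.06, "avg_battles": 0.15}

def within_tolerance(stalingrad_value, other_value, stat):
    """True if the relative difference stays inside the allowed band."""
    return abs(stalingrad_value - other_value) / stalingrad_value <= TOLERANCES[stat]

# Example: a 62.0% weighted WR vs 58.0% is a 6.45% gap -> out of tolerance
print(within_tolerance(62.0, 58.0, "win_rate"))  # False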

 

I started out by comparing it to the Des Moines and the Zao, since they are the two oldest CAs in the game. Their data is of course the oldest and most out of date. These two ships would have been a lot of the applicable players' first T10 ships.

 

[Screenshots: Zao and Des Moines comparison data]

 

I was very shocked to say the least (sarcasm of course). The Zao completely dominated the third season of Clan Battles; the meta during Season 3 almost entirely revolved around countering the legendary mod Zao. And yet, by these metrics, the Zao is completely thrashed by the Stalingrad. I was actually shocked a bit (no sarcasm) to see that the Des Moines wasn't nearly as dominated by the Stalingrad as I thought. Despite being one of the oldest ships in the entire game, and having years of players making it their first T10 ship, it was within the tolerances for both win rate and average frags (kills).

 

I like to think of the next few ships as the middle generation of tier ten cruisers: the Hindenburg, the Moskva, and the Minotaur.

 

[Screenshots: Hindenburg, Moskva, and Minotaur comparison data]

 

When I first saw the Hindenburg data I was blown away. I couldn't believe it. I average somewhere around 160k on my Hindenburg, with a brutally good win rate too. I thoroughly did not expect to see the Hindenburg thrashed like that by the Stalingrad. Then I remembered that for about half of the Hindenburg's life, it didn't have the 1/4 HE pen buff. Either way, it is statistically defeated by the Stalingrad at this time.

The Moskva is the first of the T10 CAs that actually falls within the bloated average-battles tolerance, and despite going the vast majority of its existence without its amazing legendary mod or its fantastic 50 mm lower bow plate, its win rate is actually within the tolerances. Shocked again I was.

The Minotaur is substantially newer than either the Hindenburg or the Moskva, and her stats bear this out. Her average damage is substantially lower than the Stalingrad's; however, her win rate and average frags are both within the tolerances.

 

 

The last three are the newest tier ten cruisers out there. They are the Henri IV, the Salem, and the Worcester.

 

[Screenshots: Henri IV, Salem, and Worcester comparison data]

 

The Henri IV was recently buffed dramatically with its uber monster dpm buff of a legendary mod as well as its Clan Battle meta-defining Main Battery Reload Booster. I'm really not sure if those buffs are reflected here or not; only the players that played these ships and Wargaming would know that. Either way, she is within the tolerances for every statistic except average frags, which of course makes sense given how far back one needs to play the Henri IV.

 

The Salem data is only here as an attempt at being thorough. There was only a single player in the 66-player Stalingrad data set that also had over 80 battles in the Salem. This data is straight up worthless.

 

The Worcester is the only ship that beats the Stalingrad. The two are within the tolerances of each other, but unlike every other example, where the Stalingrad is edging out the other ship, it is the Worcester that edges out the Stalingrad in every category except battles. The Worcester is also the only ship to be within 2% of the Stalingrad's average battle count. Their data is the most similar, as well as the newest.

 

 

- TLDR -

I don't really know if the ship is overpowered or not. Personally I don't think so. The data I retrieved (like all data) can be manipulated and interpreted in many different ways. Plus, it's fundamentally flawed, since large swaths of it are going to be sorely out of date, with me having no way to logically excise that out-of-date data. Yes, the Stalingrad seems to be stronger than the vast majority of other tier ten cruisers out there. However, she is not the top dog of T10, as that crown rests with the Worcester. The other trend I noticed is that as one travels through the data from oldest ship to newest ship, the Stalingrad goes from overpowered, to right in line, to slightly behind. Plus, you know... this all comes from random battles.

 

I tried to be brief!

 

Link to the same sampling process, but using data from the EU server.

Edited by Spartias
505
[90TH]
Members
1,096 posts
9,811 battles
57 minutes ago, Spartias said:

Well, to be honest it's quite impossible to actually get flawless data with what we're given in the API

Yep.  I wish more people understood this.  The PR rating on WoWS-numbers is absolute trash that’s completely made up by some guy with zero knowledge of statistics, yet I hear people brag about their PR all the time like it means anything.

Your attempt to compare ships within the same sample of players is a reasonable approach, but also fundamentally flawed.  Cherry picking only the top players immediately invalidates the results, IMO.  You are not measuring the performance of the ships in a general population. You are measuring the height of a ship’s skill ceiling. It also invalidates the variances, since you have a very strong sample bias. You’ve artificially narrowed the variance by selecting only a thin slice of the player base: similarly-skilled players will have similar stats, with a tight variance due to your sample selection procedure.
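To see that narrowing concretely, here's a toy simulation (my own sketch, with made-up numbers; the mechanism is the point): draw a roughly normal population of win rates, keep only the top slice, and compare the spreads.

import random
import statistics

random.seed(0)
population = [random.gauss(50.0, 5.0) for _ in range(100_000)]  # simulated win rates
top_slice = sorted(population)[-66:]                            # keep only the "top 66"

print(statistics.stdev(population))  # about 5.0
print(statistics.stdev(top_slice))   # a tiny fraction of that: the tail is packed tight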

Also, with a sample size on the order of 60 players, 100 games... ugh.

There are other problems, too, like the fact that the data mixes together games from before and after multiple buffs and the introduction of legendary modules.

Thanks for trying, and thanks for good awareness of the difficulties, but ultimately I don’t think the results have much meaning if any, sorry. Like you said, the necessary data is simply not available from the API.

Edited by n00bot

1,176
[XBRTC]
Members
2,987 posts
9,666 battles
56 minutes ago, n00bot said:

Thanks for trying, and thanks for good awareness of the difficulties, but ultimately I don’t think the results have much meaning if any, sorry. Like you said, the necessary data is simply not available from the API.

 

but it IS available from MXStat.

Now... if only you had a potato-ish player who was willing to share all the interesting data, and who has a bunch of the T10 cruisers, including Stalingrad...

65
[PLPT]
Members
214 posts
9,946 battles
8 hours ago, Spartias said:

- TLDR -

I don't really know if the ship is overpowered or not. Personally I don't think so. The data I retrieved (like all data) can be manipulated and interpreted in many different ways. Plus, it's fundamentally flawed, since large swaths of it are going to be sorely out of date, with me having no way to logically excise that out-of-date data. Yes, the Stalingrad seems to be stronger than the vast majority of other tier ten cruisers out there. However, she is not the top dog of T10, as that crown rests with the Worcester. The other trend I noticed is that as one travels through the data from oldest ship to newest ship, the Stalingrad goes from brutally overpowered, to right in line, to slightly behind. Plus, you know... this all comes from random battles.

 

I tried to be brief!

I agree with this, with the caveat that I think the Hindenburg, and even the Zao, are not fairly represented. As you point out, the 1/4 pen buff for the Hindy was a significant buff, and its HE is an effective counter to the Stalin at range. The Zao legendary, with its improved dispersion and torpedo range increase, is not to be overlooked either, and may not have had enough time to properly influence the stats. The IFHE Henri build is capable of pummeling the Stalin!

I agree with your statement; I would only remove the word "brutally" from "brutally overpowered". It is a beast though, to be sure, so much more so in random battles. <o

Edited by Zairinzan

65
[PLPT]
Members
214 posts
9,946 battles
4 minutes ago, Abides said:

whether it is or isn't OP is debatable... those lazer guns are lethal.

Indeed, if those guns see your side, it's not gonna tickle.

505
[90TH]
Members
1,096 posts
9,811 battles
8 hours ago, LT_Rusty_SWO said:

but it IS available from MXStat.

Now... if only you had a potato-ish player who was willing to share all the interesting data, and who has a bunch of the T10 cruisers, including Stalingrad...

I had a long conversation with @SnipeySnipes, founder of MXStats, about WoWS stats and his dataset. He is legit, but unfortunately there's still a massive sample bias in the dataset, for a reason anyone can understand: people only upload their best games as replays. Also, his data is now old and has limited applicability to questions about newer ships, like... is the Stali OP in the current meta?

If anyone can tease results out of that mess, Snipey can, but it’s a massive challenge.

IIRC, one of his findings was that damage explains about 40% of the variance in win rate (r-squared = 0.4). Damage dealt was the biggest factor he found for WR, but to me, 40% is a relatively weak relationship compared to the way so many people absolutely worship Damage Dealt.
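(If anyone wants the concrete meaning of that number: square the Pearson correlation between per-player damage and win rate. A quick sketch with dummy numbers, purely to show the computation:)

import numpy as np

# Dummy per-player averages, not real data
avg_damage = np.array([80_000, 95_000, 110_000, 120_000, 135_000, 150_000])
win_rate   = np.array([48.0,   55.0,   51.0,    60.0,    58.0,    65.0])

r = np.corrcoef(avg_damage, win_rate)[0, 1]  # Pearson correlation coefficient
print(r ** 2)  # fraction of win-rate variance "explained" by damage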

And by the way, WoWS-numbers' Personal Rating amplifies the weight of damage relative to win rate, which tells me that PR is immediately worse than simple WR, without bothering to do any further analysis.  If two players have the same win rate, I want the one with lower PR on my team!

Also, I definitely recommend reading Snipey’s blog if you’re interested in warships statistics. He’s a pro.

Edited by n00bot
468
[OO7]
Members
1,403 posts
8,212 battles
9 hours ago, n00bot said:

Yep.  I wish more people understood this.  The PR rating on WoWS-numbers is absolute trash that’s completely made up by some guy with zero knowledge of statistics, yet I hear people brag about their PR all the time like it means anything.

Your attempt to compare ships within the same sample of players is a reasonable approach, but also fundamentally flawed.  Cherry picking only the top players immediately invalidates the results, IMO.  You are not measuring the performance of the ships in a general population. You are measuring the height of a ship’s skill ceiling. It also invalidates the variances, since you have a very strong sample bias. You’ve artificially narrowed the variance by selecting only a thin slice of the player base: similarly-skilled players will have similar stats, with a tight variance due to your sample selection procedure.

Also, with a sample size on the order of 60 players, 100 games... ugh.

There are other problems, too, like the fact that the data mixes together games from before and after multiple buffs and the introduction of legendary modules.

Thanks for trying, and thanks for good awareness of the difficulties, but ultimately I don’t think the results have much meaning if any, sorry. Like you said, the necessary data is simply not available from the API.

 

I agree on all of those fronts; however, I do think it's the closest we are going to get to accurate data with what we've got.

 

Interestingly enough, by the way, the fact that there were only 66 players with the applicable number of battles to make the wows-numbers top 100 list actually works out to an extent. They are not all similarly skilled players.

 

When I divided the base sample set of 66 players into quarters, the majority of players were in the low 60% win rate bracket; however, there were still 14 players that rested in the low-to-medium 50% range as well. The spread was quite mathematically beautiful. But yes, it is a very small sample size of players. Though, with the exception of the Salem, this methodology was able to render comparisons in excess of 2,000 battles every time. It'll be interesting to see how the data continues to pan out as more and more players enter the wows-numbers top 100 list for the ship, thereby increasing the sample size. A rough sketch of that bracketing is below.
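This is roughly how I split the sample into quarters, for anyone who wants to repeat it. The win rates below are placeholders; the real 66 values came out of the sheet.

import statistics

win_rates = [51.2, 52.8, 54.3, 55.1, 57.8, 59.0,
             60.2, 61.5, 62.3, 63.0, 66.4]  # placeholder values, not the real sample

q1, q2, q3 = statistics.quantiles(win_rates, n=4)  # quartile cut points
quarters = [[wr for wr in win_rates if wr <= q1],
            [wr for wr in win_rates if q1 < wr <= q2],
            [wr for wr in win_rates if q2 < wr <= q3],
            [wr for wr in win_rates if wr > q3]]
for i, q in enumerate(quarters, 1):
    print(f"quarter {i}: {len(q)} players, {min(q):.1f}-{max(q):.1f}% WR")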

 

Nothing will ever cure the old-data problem, however. That caveat will forever remain unless Wargaming gives us a way to identify how old all of the data is.

468
[OO7]
Members
1,403 posts
8,212 battles

I've run the same sampling process using EU data. They have a full list of top 100 players, making their base sample of applicable Stalingrad players much larger.

 

For comparison, here is the NA base Stalingrad sample before purging any data to equalize it against another ship:

[Screenshot: NA base Stalingrad sample]

 

The next image shows the same data, but for the EU top 100 Stalingrad players:

[Screenshot: EU base Stalingrad sample]

 

 

I started out with the same two ships to compare, the Zao and the Des Moines. 

 

[Screenshots: EU Zao and Des Moines comparison data]

 

Once again, I'm really not sure there's anything at all to be learned from these two ships, since a lot of their data is extremely old. Heck, I can't tell if the number one player in the Zao played it when it could still stealth-fire, or if the number five Des Moines player was a beast in the ship back when it didn't have any radar.

 

Next we'll look at the EU Hindenburg, Moskva, and Minotaur.

 

[Screenshots: EU Hindenburg, Moskva, and Minotaur comparison data]

 

See, there are the Hindenburg stats I thought I'd see in the NA data... but here they are, misplaced over in the EU! Seriously guys... it's embarrassing how much better the EU data looks so far on just about every single ship. A lot of the comments I made about the NA subsets of data apply over here too when it comes to the trends: the lower the variance on the average battle count, the more in line the ships seem to be. Although holy jeebus, those guys over in the EU really like their Moskva play compared to us! They've played it so much more than we have! With that many battles in the sample, I'd wager (though I cannot prove) that a lot of those Moskva battles came from far before she ever got her recent buffs.

 

And now our last three ships: the Henri IV, the Salem, and the Worcester.

 

[Screenshots: EU Henri IV, Salem, and Worcester comparison data]

 

Once again, the Salem is pointless to even glance at. There's only a single EU player (just like that single NA player) that qualifies for sampling under these conditions. The Henri IV and the Worcester, however, both fall within the tolerances for average battles. The Henri IV lags behind in every category that isn't battles, and is out of the tolerances for average frags. Though once again, that's likely due to how she's played: the Henri is supposed to be played far back, where getting the final blow is far more difficult to do consistently. Unlike on NA, where the Worcester actually leads the Stalingrad, on EU it is the Stalingrad that edges out the Worcester. They are within the tolerances for every single category, however.

 

So once again, the two newest sets of data lean towards an in-tolerance Stalingrad, while out-of-date data leans towards an overpowered one. The older the data, the more the Stalingrad looks overpowered; the newer the data, the more it says that yes, she's strong, but within acceptable bounds.

77
[SALTY]
Supertester
230 posts
6,518 battles

@Spartias I love the work that you did above! Well Done! 

I would agree that it is still 'unclear' whether or not the Stalingrad is OP. When comparing apples to apples, like you've done, she gets beaten out by the Worcester. A solid test would be to put similarly skilled players 1v1 against each other - Worcester v Stalingrad - in 100 matches to see how that works out. You could also do this in a training room with bots. Neither here nor there. The long and short of it, though, is that due to the nature in which players get this ship, they are typically in the top quartile of players, compared to, say, the Hindy, which anyone can get. Good players in a strong ship will make that ship look OP, particularly against your average weekenders.

1,176
[XBRTC]
Members
2,987 posts
9,666 battles
2 hours ago, n00bot said:

I had a long conversation with @SnipeySnipes, founder of MXStats, about WoWS stats and his dataset. He is legit, but unfortunately there's still a massive sample bias in the dataset, for a reason anyone can understand: people only upload their best games as replays. Also, his data is now old and has limited applicability to questions about newer ships, like... is the Stali OP in the current meta?

 

wth are you even talking about? MXStat is a personal datamining tool that sifts through the WoWs replays and log files on your hard drive to find your stats. It's made by a Russian dude. It doesn't depend in any way on other people uploading replays.

468
[OO7]
Members
1,403 posts
8,212 battles
1 hour ago, SnipeySnipes said:

@Spartias I love the work that you did above! Well Done! 

I would agree that it is still 'unclear' whether or not the Stalingrad is OP. When comparing apples to apples, like you've done, she gets beaten out by the Worcester. A solid test would be to put similarly skilled players 1v1 against each other - Worcester v Stalingrad - in 100 matches to see how that works out. You could also do this in a training room with bots. Neither here nor there. The long and short of it, though, is that due to the nature in which players get this ship, they are typically in the top quartile of players, compared to, say, the Hindy, which anyone can get. Good players in a strong ship will make that ship look OP, particularly against your average weekenders.

Thank you for the kind words! I saw your post a while ago, but I was driving.

 

I could get usable data the way you're suggesting; however, that only leaves me with a sample size of 100 battles per ship. I went with the API mining method because even when presented with only 15-20 players, it still yields a comparable battle count of around 2,000 to 3,000. And that's just for one side; it'll be compared against another 2,000-3,000 battle sample.

 

Your method would definitely solve the aging stats problem that cripples my methodology. However it is subject to corruption as well. What if I went into this with a mission to prove that the Stalingrad was OP or not OP? I'm playing against bots in your scenario. I can totally make that data look however I want. But what if I didn't do that? What if I was honest? Is there any way to figure out if I was honest or not? It's just blind trust at that point. 

 

It's why we'll never really know. That's why I just added the EU data. The more data presented, the more it can be sifted through.

Edited by Spartias

505
[90TH]
Members
1,096 posts
9,811 battles
20 minutes ago, LT_Rusty_SWO said:

wth are you even talking about? MXStat is a personal datamining tool that sifts through the WoWs replays and log files on your hard drive to find your stats. It's made by a Russian dude. It doesn't depend in any way on other people uploading replays.

Hoping Snipey jumps in to clarify everything, but he worked with a second guy, probably the one you mentioned. He analyzed that data and published some analysis before WG changed the security key on the replay files. Now only wowsreplays has the replay encryption key, and MXStats' data is old, from before the key was changed. wowsreplays now has an effective monopoly on replay files, granted by WG. No one can start a replays site without cracking the file's encryption, and MXStats has been broken for a while.

The sample bias is still there: people who download third-party apps tend to be serious players, not potatoes.

Edited by n00bot

1,176
[XBRTC]
Members
2,987 posts
9,666 battles
52 minutes ago, n00bot said:

Hoping Snipey jumps in to clarify everything, but he worked with a second guy, probably the one you mentioned. He analyzed that data and published some analysis before WG changed the security key on the replay files. Now only wowsreplays has the replay encryption key, and MXStats' data is old, from before the key was changed. wowsreplays now has an effective monopoly on replay files, granted by WG. No one can start a replays site without cracking the file's encryption, and MXStats has been broken for a while.

The sample bias is still there: people who download third-party apps tend to be serious players, not potatoes.

 

Again:

MXStat is not a website.

It's a standalone data mining tool. And it still works just fine. Here's a sample of the data that it exports.

sample.xlsx

 

And, re: serious player vs potato... there are plenty of people who spend a lot of time playing this game but aren't particularly spectacular at it.

Edited by LT_Rusty_SWO

3,389
[CRMSN]
Members
8,805 posts
9,670 battles

I've yet to meet a Stali I couldn't smack down with Zao. To me, Stali only becomes OP when there's more than 1 on the same team in CB's, and I've held my own with Zao against 2 of them. The issue with Stali that causes it to seem OP is that it's mostly in the hands of really good players right now. As its presence in the game increases, its stats will drop dramatically.

77
[SALTY]
Supertester
230 posts
6,518 battles

@n00bot I didn't work on the MXStats program; I only worked with Jammin411 (the original designer of MatchMaking Monitor) on a new statistical measure for player skill. Further, Lt_Rusty is correct: MXStats is a personal datamining program that gets ONLY your information, from your replays and log files.

And to Spartias, to clarify: play either 2 equally skilled players against each other, OR bot vs bot.

468
[OO7]
Members
1,403 posts
8,212 battles
1 hour ago, SnipeySnipes said:

And to Spartias, to clarify: play either 2 equally skilled players against each other, OR bot vs bot.

Interesting idea.

 

However, that's not representative of what each ship is strong at.

 

The Zao is strong at stealthing up and hitting at opportune moments.

The Henri IV is strong at high dpm at range as well as fast redeployment.

The Hindenburg is strong in a brawl or the slow kite at range.

The DM is strong close around an objective where it can use line of sight and its radar.

etc...

 

A single one-versus-one does not represent which ship is stronger at influencing its match towards a win. Randoms, as well as competitive, are full of variables that make a single one-versus-one mostly irrelevant data. It takes data from randoms or competitive to represent randoms or competitive. Clan Battles statistics are not readily available in the API, but random battle data is.

1,176
[XBRTC]
Members
2,987 posts
9,666 battles
13 hours ago, Spartias said:

A single one-versus-one does not represent which ship is stronger at influencing its match towards a win. Randoms, as well as competitive, are full of variables that make a single one-versus-one mostly irrelevant data. It takes data from randoms or competitive to represent randoms or competitive. Clan Battles statistics are not readily available in the API, but random battle data is.

 

There's all sorts of things that you can tease out of that spreadsheet that I sent you, and it has plenty of clan battles in it with full data. Look at the percent of the battle survived, for instance, between Stalingrad and Moskva. Look at average hit percentage between them. That sort of thing. Since you even have the complete makeup of both teams, you should even be able to make an educated guess at correction factors for enemy skill / friendly skill and how that affected whether a battle was won or lost, and since you have a year's worth of battles there, you can see trends over time.
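If you want a starting point for digging into that export, something like this is where I'd begin. Fair warning: the column names below are my guesses, not necessarily the actual headers in sample.xlsx; check the file before trusting any of it.

import pandas as pd

df = pd.read_excel("sample.xlsx")  # the MXStat export

# Guessed columns: "ship", "pct_battle_survived", "hit_percentage", "date", "won"
cruisers = df[df["ship"].isin(["Stalingrad", "Moskva"])].copy()

# Survival and accuracy, side by side
print(cruisers.groupby("ship")[["pct_battle_survived", "hit_percentage"]].mean())

# Win-rate trend over time, month by month
cruisers["month"] = pd.to_datetime(cruisers["date"]).dt.to_period("M")
print(cruisers.groupby(["ship", "month"])["won"].mean())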

1,194
[WOLF2]
Members
4,310 posts
16,708 battles
3 minutes ago, LT_Rusty_SWO said:

 

There's all sorts of things that you can tease out of that spreadsheet that I sent you, and it has plenty of clan battles in it with full data. Look at the percent of the battle survived, for instance, between Stalingrad and Moskva. Look at average hit percentage between them. That sort of thing. Since you even have the complete makeup of both teams, you should even be able to make an educated guess at correction factors for enemy skill / friendly skill and how that affected whether a battle was won or lost, and since you have a year's worth of battles there, you can see trends over time.

There is one problem with all your beancounting, and it's people.

Applying numbers to human behavior is itself a contradiction in terms - try though you might :)

Warfare, business, love … none of it makes any damn sense 

1,176
[XBRTC]
Members
2,987 posts
9,666 battles
4 hours ago, Commander_367 said:

There is one problem with all your beancounting, and it's people.

Applying numbers to human behavior is itself a contradiction in terms - try though you might :)

Warfare, business, love … none of it makes any damn sense 

 

You'd be surprised, really.

If, for instance, you have low secondary rounds fired from a GK and the player survives, on average, through 93% of any battle, you can infer that the person wasn't being aggressive enough. OTOH, if you have a high count of secondary rounds fired from, say, a Montana, and that player dies, on average, at the 14% mark of any battle, you can infer that the person has a problem with being too aggressive. Numbers can help inform a person and suggest different choices for the next time around.
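Put as code, just to show the shape of the inference (the thresholds are the made-up ones from my example above, nothing scientific):

def aggression_read(secondary_rounds_per_battle, avg_pct_battle_survived):
    """Crude read on play style from secondary fire volume and survival time."""
    if secondary_rounds_per_battle < 50 and avg_pct_battle_survived > 0.90:
        return "probably hanging back too far"
    if secondary_rounds_per_battle > 400 and avg_pct_battle_survived < 0.20:
        return "probably over-extending"
    return "no obvious read"

print(aggression_read(30, 0.93))   # the passive GK
print(aggression_read(500, 0.14))  # the over-aggressive Montana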

1,194
[WOLF2]
Members
4,310 posts
16,708 battles
3 hours ago, LT_Rusty_SWO said:

You'd be surprised, really.

If, for instance, you have low secondary rounds fired from a GK and the player survives, on average, through 93% of any battle, you can infer that the person wasn't being aggressive enough. OTOH, if you have a high count of secondary rounds fired from, say, a Montana, and that player dies, on average, at the 14% mark of any battle, you can infer that the person has a problem with being too aggressive. Numbers can help inform a person and suggest different choices for the next time around.

You can make a few generalized guesses regarding past performance with the benefit of hindsight, but anyone can do that … 

Predicting future events with any accuracy just does not apply to individual behavior or social interactions 

We all have our pet theories and search for patterns but even the best beancounters on Wall Street will admit they don't know any better than you or I what will happen the next day

Anybody who suggests they do is obviously lying :)

