by Anonymous user, 1235182017

It would be interesting if the DPS for each individual combat segment were recorded and used for statistical analysis of the results. There are several uses for this information.

The most interesting to me would be determining uncertainty and confidence intervals for calculating EP values. It's difficult to determine how long you should be simming for without these kinds of measures, aside from blindly following other people's empirical judgements of 'at least 10000 hours'. Using this data, the sim could output a 99% confidence interval for each EP value, which would give a much better idea of how precise the results are.

Additionally, it would give people a better understanding of how much they can expect their DPS to deviate on a single fight due to random chance. An upper and lower DPS bound, or an x% confidence interval, for a given input would make it less confusing for people when, on a given fight, they see a large deviation from the expected results.

I've been wanting to see this feature as well. Mainly for the upper and lower DPS bound to see how much variance different playstyles have.

It sounds nice, but you could have a fight where all you did was ranged DPS or off-heal. There is too much leeway in player behavior. Perhaps you can clarify?

Can you give me the specific algorithms to determine the uncertainty and confidence intervals for EP calculations?

The value we're interested in is the standard error of the mean (http://en.wikipedia.org/wiki/Standard_error_(statistics)).

Assuming that over the course of a reasonably long sim run the measured DPS values for the combat segments are normally distributed (almost certainly true due to the central limit theorem; I've also tested it graphically by outputting the DPS of every segment for a 1000h run), we can calculate a maximum error value for any desired confidence level.

- Calculate the sample mean (m) and standard deviation (s) for DPS across all recorded combat segments.

- Choose a confidence interval and find the appropriate Z value (e.g. for 99% it's 2.576) (calculated using erf, generally numerically: http://en.wikipedia.org/wiki/Normal_distribution#Standard_deviation_and_confidence_intervals)

- The standard error of the mean given n readings is s/sqrt(n)

- Your DPS for this run is thus within the interval [m-z*s/sqrt(n), m+z*s/sqrt(n)] (with 99% certainty if z=2.576).
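The steps above can be sketched in a few lines of Python. This is just an illustration of the procedure, not SimulationCraft's actual code; the function name and inputs are hypothetical, and the z value is assumed to be supplied by the caller rather than computed from erf:

```python
import math

def dps_confidence_interval(segment_dps, z=2.576):
    """Confidence interval for the mean DPS over recorded combat segments.

    segment_dps: one measured DPS value per combat segment.
    z: critical value for the desired confidence level
       (2.576 for 99%, 1.96 for 95%).
    """
    n = len(segment_dps)
    m = sum(segment_dps) / n
    # Sample standard deviation (with Bessel's correction, n - 1).
    s = math.sqrt(sum((x - m) ** 2 for x in segment_dps) / (n - 1))
    sem = s / math.sqrt(n)  # standard error of the mean, s/sqrt(n)
    return m - z * sem, m + z * sem
```

For example, `dps_confidence_interval([100, 102, 98, 101, 99])` returns an interval centered on the sample mean of 100, and the interval shrinks as more segments are added.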

This number is already useful, as it tells the user that 99 out of 100 times you hit the simulate button, you'll get a value in this range. It lets you know whether you're simulating for enough hours.

As for EP calculations… you assume the worst-case values in the ranges as you do your calculations.

I'll type it out for per-point DPS calculations, because it's slightly simpler, and EP is 1 step away from it:

- Calculate these intervals for each of the runs being done (base, AP, agi, crit, etc.)

- Do the DPS-per-point calculation using the min value for the stat run and the max value for the base run to get the minimum DPS-per-point value

- Do the DPS-per-point calculation using the max value for the stat run and the min value for the base run to get the maximum DPS-per-point value

So if your base DPS is [6000,6050] and your DPS with +20 crit is [6100,6150], the additional DPS you get per crit is [(6100-6050)/20, (6150-6000)/20] = [2.5,7.5].
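The worst-case bound calculation is mechanical; here is a minimal sketch of it (again a hypothetical helper, not the sim's actual API), reproducing the numeric example above:

```python
def dps_per_point_interval(base_lo, base_hi, stat_lo, stat_hi, points):
    """Worst-case bounds on DPS gained per point of a stat.

    (base_lo, base_hi): confidence interval for the baseline run's DPS.
    (stat_lo, stat_hi): interval for the run with `points` extra stat.
    """
    lo = (stat_lo - base_hi) / points  # pessimistic: low stat DPS, high base
    hi = (stat_hi - base_lo) / points  # optimistic: high stat DPS, low base
    return lo, hi

# Base DPS in [6000, 6050], DPS with +20 crit in [6100, 6150]:
dps_per_point_interval(6000, 6050, 6100, 6150, 20)  # → (2.5, 7.5)
```

Note that the per-point interval is wider than either input interval relative to the small DPS difference, which is why EP values need far more sim hours to converge than raw DPS does.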

Let me know if any of that is unclear.
