Since I am playing Fantasica, let's use it as an example. In most instances I will use pseudo-code similar to R, but it should be clear to anyone with basic programming knowledge.

Model 1: The tower event.

Situation: You are given a tower with a fixed number of floors. Each floor contains a certain number of steps, and each step costs a certain amount of AP (action points) to climb. The AP cost per step changes every 40 floors, and the number of steps per floor changes every 10 floors. On each attempt there is a certain chance of meeting a monster, in which case AP is consumed but no progress is made. Each player has an AP cap, and he cannot climb if his current AP is less than the step cost. Using a potion fills the AP bar completely. Every floor that is a multiple of 10 is a 'boss floor', where no AP is needed to beat the boss.

Assume no AP recovery and no failure against bosses. Simulate the distribution of the number of potions required to climb the whole tower.

----------------------------------

This is rather easy. Let's use an analogy: climbing a floor is like flipping a (probably biased) coin, and you move one step forward on every head. Can you name this random variable? Yes, the number of tails before a fixed number of heads follows the negative binomial distribution.
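As a quick sanity check in R (note that rnbinom counts the failures before a target number of successes, so the total number of attempts is the failures plus the successes; the numbers here are just illustrative):

```r
set.seed(42)
p <- 0.15   # monster encounter rate (failure probability)
s <- 220    # successful steps needed
# rnbinom(n, size, prob) draws the number of failures before `size`
# successes, so total attempts = failures + s
attempts <- rnbinom(100000, s, 1 - p) + s
mean(attempts)  # close to s / (1 - p) = 258.8
```

The mean total attempts is s/(1-p), which is where the "almost deterministic effort" observation later comes from: the relative spread shrinks as s grows.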

The number of steps per floor doesn't matter, since we can group together all the steps in a region with a constant per-step AP cost. If every step costs x AP and the AP cap is N, then every potion allows us to take floor(N/x) steps, and we can simply add the potions up region by region. To use the AP left over when crossing from one region to the next, we add one extra line to handle the carry-over. This is easy if we assume that one potion is never enough to clear an entire region; if that assumption fails, more modification is needed.
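For instance, with an AP cap of N = 300 and the per-step costs used later in this post (assumed to be close to the real game data), the steps-per-potion and wasted AP per potion work out as:

```r
N <- 300            # AP cap
x <- c(14, 16, 18)  # AP per step in each region
floor(N / x)        # steps per potion: 21 18 16
N %% x              # AP wasted per refill: 6 12 12
```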

How can we estimate the monster encounter rate (from a player's point of view)? This is easy with a few trials.

Assume the monster encounter rate is p, and divide the tower into k regions according to the per-step AP cost. Let a_1, ..., a_k be the AP required per step in regions 1 to k, and s_1, ..., s_k the number of steps needed to go through each region. Define a function of p, N, the a_i and the s_i:

tower = function(p, N, a, s) {
  l = length(a)
  pot = 0
  AP = 0  # AP carried over from the previous region's last potion
  for (i in 1:l) {
    # total attempts = successful steps + monster encounters (negative binomial)
    step = rnbinom(1, s[i], 1 - p) + s[i]
    step = step - floor(AP / a[i])  # spend the leftover AP first
    spp = floor(N / a[i])           # steps per potion in this region
    pot = pot + floor(step / spp) + 1
    AP = N - (step %% spp) * a[i]   # AP left after this region's last potion
  }
  return(pot)
}

Practical example.

a = c(14, 16, 18), s = c(220, 330, 480), p = 0.15, N = 300 (close to the real data in Fanta)
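To actually draw the distribution, a sketch like the following could be used (it repeats the tower() function so the snippet runs on its own; the parameter values are the approximations above, not exact game data):

```r
# potions needed to clear one region-structured tower (one simulated run)
tower <- function(p, N, a, s) {
  pot <- 0
  AP <- 0  # AP carried over from the previous region
  for (i in seq_along(a)) {
    # attempts = steps + monster encounters (negative binomial)
    step <- rnbinom(1, s[i], 1 - p) + s[i]
    step <- step - floor(AP / a[i])  # spend leftover AP first
    spp <- floor(N / a[i])           # steps per potion in this region
    pot <- pot + floor(step / spp) + 1
    AP <- N - (step %% spp) * a[i]   # AP left after the last potion
  }
  pot
}

set.seed(1)
pots <- replicate(10000, tower(0.15, 300, c(14, 16, 18), c(220, 330, 480)))
hist(pots, freq = FALSE, main = "Potions to clear the tower")
```

The mean lands around 70 potions, with a fairly tight spread.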

Observation: the variation is small, so the effort needed before the reward is almost deterministic. This favours free players, who can plan exactly how far they have to go... but don't forget that game developers are always cruel to free players! The reward for lap completion turns out to be random, which is quite annoying... but that's another story.

Model 2: The panel event.

If you do not play Fantasica you may have difficulty understanding where 'panel' comes from, but it plays no role in the model, so let's ignore it.

Situation: Quests can be played with a 5-minute cool-down. Assume that you plan to play the quest as densely as possible, but you can't launch a quest exactly every 5 minutes (due to network lag, lapses in concentration, etc.), and the time wasted has a certain distribution.

Assume that the time spent inside each quest is less than 5 minutes. Given a total amount of time, plot the distribution of how many quests you can play. To make life even easier, assume the delays are exponentially distributed with mean 10 seconds.

-----------------------

Anyone with some stats knowledge will notice that this looks very similar to a Poisson process. However, there is a problem: the process is not memoryless, as each launch must be at least 5 minutes after the last one (it is a renewal process rather than a Poisson process). What should we do?

We know that the time between two quest launches is still a random variable: 5 minutes plus a delay whose distribution we know; call it X. It would not be wise to derive the distribution of the sum analytically, but we can do some bootstrapping.

The idea: first generate N (large) samples of the delay time and re-sample from them; then, using cumulative sums, we can count how many quests fit within the given time interval.

panel = function(ttot, tque, tdemean) {
  delay = rexp(1000, 1 / tdemean)  # pool of delays to bootstrap from
  N = 10000
  xs = numeric(N)
  step = ceiling(ttot / tque)      # upper bound on the number of quests
  for (i in 1:N) {
    x = sample(delay, step, replace = T) + tque  # inter-launch times
    y = cumsum(x)                  # launch times
    xs[i] = length(y[y <= ttot])   # quests finished within the total time
  }
  hist(xs, freq = F)
  d = density(xs, bw = .6)
  lines(d, col = 'blue', lwd = 2)
}

Note that the units should be consistent throughout. Let's try a 12-day event with 20 hours of play each day... how many times can one play? Try ttot = 20*12*3600, tque = 300 (5 minutes), tdemean = 10.

(er... forgive me for not changing the title and labels.)

Note that the mean is approximately ttot/(tque + tdemean). What about other delay distributions? In practice the delays are probably heavier-tailed, because we sometimes leave the phone for a long while: going to the toilet, taking a bath, etc. Taking these into account would spread the distribution out a bit more. Overall, though, the variation is arguably small and easy to deal with, since this kind of randomness is quite controllable.
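A quick check of that approximation (a self-contained sketch under the same exponential-delay assumption, skipping the bootstrap pool and the plotting):

```r
set.seed(7)
ttot <- 20 * 12 * 3600  # 12 days, 20 hours each, in seconds
tque <- 300             # 5-minute cool-down
tdemean <- 10           # mean delay in seconds
nmax <- ceiling(ttot / tque)  # upper bound on quests per run
plays <- replicate(2000, {
  gaps <- tque + rexp(nmax, 1 / tdemean)  # inter-launch times
  sum(cumsum(gaps) <= ttot)               # quests finished in time
})
mean(plays)  # compare with ttot / (tque + tdemean) = 2787.1
```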

So far we have investigated models that involve only one party: the parameters are fixed and only a single player matters. What if there are interactions between the client and server sides, or among players? Such PvP (player vs. player) situations are much less predictable, and we will have a lot to say about them next time.