Queueing Theory
Week 5: The Queueing Model
- A Proof for The Queueing Formula: L = lambda W.



This is the simplest form of the queueing model: a single conceptual server with a queue attached to it.
Example 3

Note: r here stands for rate.
Example 4

Requirement: in > S1 > S2 > S3 > S2 > out
Imagine a processing line in a factory. S3 may be a quality checker that makes sure the product is okay before output.
Here’s one advantage of the model: if we know the input rate and the rate of every server, then it’s easy to analyse the output rate without being distracted by other aspects of the system.
Analysing a Queueing System
For the simplest system in Example 2, we are curious about how long it will take from in to out.
Terminologies

mu: mean service rate
lambda: mean arrival rate
X: throughput
As shown in the figure, the average service rate is determined from actual data, supporting the idea that we determine the rate by performing experiments on the system.


Note that “response time” here differs from what the term means in the context of operating systems.
Performance Metrics

- Processing Time
- Queueing Time
- Transmission Time
- Propagation Time
- Response Time / Latency (T)
- Throughput (X)
- Utilization (rho)
- Propagation Time
Propagation time means how long it takes a single bit to travel across the medium from sender to receiver.
Utilization Law

Proof: throughput (X) = mean service rate (mu) × utilization (rho)
B: amount of time the system is busy (within some observation period t)
C: number of items the system processed within that period
T: how long the users actually wait, effectively determining the user experience
Since rho = B/t, X = C/t, and mu = C/B, we have mu · rho = (C/B)(B/t) = C/t = X.
Let’s consider: if the arrival rate (lambda) doubles but the response time must stay the same, how fast should the mean service rate (mu) become?

Should the new mean service rate be 2·mu?
Note that we usually have an implicit assumption: the system is not overwhelmed; statistically, the output rate is approximately equal to the input rate, i.e. X ≈ lambda.
ANALOGY: “mu” is like capacity; “X” is like the actual bandwidth.
Therefore the new mean service rate does not necessarily need to be twice the original value.
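As a quick numerical sketch of this point — assuming an M/M/1 system, where the mean response time is T = 1/(mu − lambda) (covered in Week 6); the rates below are illustrative values, not from the course:

```python
def required_mu(lam, mu, factor=2.0):
    """Service rate needed so T is unchanged when lambda scales by `factor`.

    Assumes M/M/1, where T = 1 / (mu - lam).
    """
    T = 1.0 / (mu - lam)           # original mean response time
    return factor * lam + 1.0 / T  # solve 1/(mu' - factor*lam) = T

lam, mu = 3.0, 5.0                 # arrivals/s, services/s (illustrative)
mu_new = required_mu(lam, mu)
print(mu_new)                      # 8.0 -- less than 2 * mu = 10
```

So the server must get faster, but not twice as fast: the slack term (mu − lambda) stays fixed while only the arrival term doubles.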

Figure: Analysing the system provided in Example 4.
Little’s Law
E[N] = lambda · E[T]
lambda: arrival rate
T: response time
N: number of items in the system
Usage 1: we can calculate the response time of a system by monitoring the other two metrics.
Usage 2 (analogy): a restaurant — if we know the customer arrival rate and the table turnover rate, we can calculate how many seats are needed.
Usage 3 (analogy): a graduate lab — (number of students currently in the lab) / (students admitted per year) = average time each student stays.
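Little’s Law is easy to exercise numerically. The sketch below applies N = lambda · T to the two analogies above; all numbers are illustrative:

```python
def littles_law(arrival_rate=None, response_time=None, items=None):
    """Given any two of (lambda, T, N), return the third via N = lambda * T."""
    if items is None:
        return arrival_rate * response_time   # N = lambda * T
    if response_time is None:
        return items / arrival_rate           # T = N / lambda
    return items / response_time              # lambda = N / T

# Lab: 5 students admitted per year, each stays ~4 years on average
print(littles_law(arrival_rate=5, response_time=4))   # N = 20 students
# Restaurant: 30 customers/hour arriving, 40 seats occupied on average
print(littles_law(arrival_rate=30, items=40))         # T ~ 1.33 hours
```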
Derivation
N(t): number of items in the system at time t.
alpha(t): number of items that arrived in the interval [0, t]
beta(t): number of items that departed in [0, t]
T_i: response time of the i-th arriving item.

Graph: Derivation Details
Week 6: The M/M/1 System and the Markov Chain
The M/M/1 System
M: Interarrival time follows exponential distribution.
M: Service time follows exponential distribution.
1: There’s only one server.
The Kendall notation

Figure: Interarrival Time Distribution.

States of Each (Discrete) Time.

State Machine and Probabilities of Transition.

A transition probability is denoted P_ij.
The Markovian Property: The conditional distribution of any future state is independent of past states and only depends on the present state.

Definition of the Markovian property.
(!) Linear Time-invariant, LTI
Assuming the system is LTI makes it easier to analyze.
Transition probability Matrix
Diving into how the states transition.
It turns out that matrix multiplication describes such transitions pretty well.
Note that this is called a “transition matrix” (轉移矩陣); it’s taught in senior-high math class, especially for systems with only two states.


Since we have the Markovian property, each step depends only on the current state. Therefore the probabilities for a series of steps are given by the product of the transition matrices of each of them; with a time-invariant chain, the n-step transition probabilities are simply P^n.

This is also taught in senior high.
Limiting Distribution
P_inf = lim(n->infinity) P^n.

We have the special case of M=2.
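The convergence to the limiting distribution can be checked numerically for a 2-state chain by multiplying the transition matrix with itself repeatedly. The entries of P below are illustrative values, not from the lecture:

```python
def mat_mul(A, B):
    """Plain matrix multiplication (no external libraries needed)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[0.9, 0.1],
     [0.4, 0.6]]   # P[i][j] = probability of moving from state i to state j

Pn = P
for _ in range(50):        # Pn ends up as P^51, close to P_inf
    Pn = mat_mul(Pn, P)

print(Pn)  # both rows converge to the same limiting distribution [0.8, 0.2]
```

Every row of P^n approaching the same vector is exactly the statement that the limit no longer depends on the initial state.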

Stationary Distribution

pi_n: the probability that the system is in state n (in steady state).
Binomial Distribution
Bernoulli trials b(k; n, p)
k: number of successes
n: number of trials
p: probability that a single trial succeeds
b(k; n, p) = C(n, k) p^k q^(n-k), where q = 1 - p
Poisson Distribution
It’s a nice approximation, since the exponential function has some nice properties.
lambda’: the prime symbol is here just to distinguish it from another lambda later.


Proof.
Exponential Distribution
Poisson Process
Independent increment - the numbers of arrivals in disjoint time intervals are independent
Stationary increment - the number of arrivals in any interval of length tau is Poisson distributed with parameter lambda·tau.
(distributions: stationary, limiting, binomial & poisson, exponential); poisson process; markov chain; CTMC
Distributions
Transition Matrix
Transition Probability Matrix
The probability of transitioning from one state to another
Notation
A probability distribution
Stationary Distribution
Stationary distribution pi with transition probability matrix P: pi = pi P
Limiting Distribution
If the state machine follows the limiting distribution pi, then regardless of the initial state i, the probability of being in state j after n steps converges: lim(n->infinity) (P^n)_ij = pi_j.
(Recall) Binomial Distribution
Bernoulli Trials
- k: number of successes
- n: number of trials
- p: probability of success of a single trial
- X: random variable that represents a Bernoulli experiment with n trials
Probability Evaluation: P[X = k] = C(n, k) p^k (1 - p)^(n-k)
Poisson Distribution
p(k; \lambda') = \frac{\lambda' ^{k}}{k!} \cdot e^{- \lambda'}
- k: number of successes
- lambda': rate
Derivation: let lambda' = np. When n is very large and p is very small, b(k; n, p) approaches p(k; lambda').
Then we define such an approximated relationship as the Poisson distribution.
The proof is not yet here.
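Even without the proof, the approximation is easy to check numerically: for large n and small p, b(k; n, p) is close to p(k; lambda') with lambda' = np. The n and p below are illustrative:

```python
import math

def binom_pmf(k, n, p):
    """b(k; n, p) = C(n, k) p^k (1-p)^(n-k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """p(k; lambda') = lambda'^k / k! * e^(-lambda')."""
    return lam**k / math.factorial(k) * math.exp(-lam)

n, p = 1000, 0.003          # large n, small p (illustrative)
lam = n * p                 # lambda' = 3
for k in range(6):
    print(k, binom_pmf(k, n, p), poisson_pmf(k, lam))
# the two columns agree to roughly three decimal places
```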
Exponential Distribution
P.D.F. is defined: f(t) = lambda e^(-lambda t), for t >= 0.
C.D.F. can be derived: F(t) = P[T <= t] = 1 - e^(-lambda t).
We take the complement of the above relationship: P[T > t] = e^(-lambda t).
Expected value of random variable T: E[T] = 1/lambda.
Theorem (memorylessness). If T follows the exponential distribution, then P[T > s + t | T > s] = P[T > t].
Proof.
Left = P[T > s + t] / P[T > s] = e^(-lambda(s+t)) / e^(-lambda s) = e^(-lambda t) = P[T > t]
= Right.
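The memoryless property can also be checked numerically from the complement of the C.D.F., P[T > t] = e^(-lambda t); the rate and time values below are illustrative:

```python
import math

lam = 0.5                    # illustrative rate

def tail(t):
    """P[T > t] for an exponential random variable with rate lam."""
    return math.exp(-lam * t)

s, t = 2.0, 3.0
lhs = tail(s + t) / tail(s)  # P[T > s + t | T > s], by definition of conditioning
rhs = tail(t)                # P[T > t]
print(lhs, rhs)              # equal: both are e^(-lam * t)
```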
Poisson Process
Terms
- A Poisson process is a stochastic process
- A Poisson process follows the Poisson distribution
- A(t): number of arrivals in the time interval [0, t]
- Naturally, A(0) = 0
- lambda: rate
Following the Poisson distribution:
P[A(t) = k] = ((lambda t)^k / k!) · e^(-lambda t),
and its mean evaluates to E[A(t)] = lambda t.
Properties
- Property 1: the interarrival times of a Poisson process are independent and exponentially distributed with rate lambda.
- Property 2: if two or more independent Poisson processes A_1, ..., A_k are merged into a single process A = A_1 + ... + A_k, then A is a Poisson process with a rate equal to the sum of the rates of the A_i, for i = 1, ..., k.
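Property 1 can be exercised in reverse as a quick sanity check: generating i.i.d. exponential interarrival times and counting arrivals in [0, t] should yield a mean count close to lambda · t, as the Poisson distribution predicts. Parameters are illustrative:

```python
import random

def arrivals_in(t, lam, rng):
    """Count arrivals in [0, t] when interarrival times ~ Exp(lam)."""
    count, now = 0, rng.expovariate(lam)
    while now <= t:
        count += 1
        now += rng.expovariate(lam)  # next arrival after one more interarrival time
    return count

rng = random.Random(0)               # fixed seed for reproducibility
lam, t, runs = 2.0, 10.0, 5000
mean = sum(arrivals_in(t, lam, rng) for _ in range(runs)) / runs
print(mean)   # close to lambda * t = 20
```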
Markov Chain
Markovian Property and DTMC
The Markovian Property: the next state depends only on the present state, or
P[X_(n+1) = j | X_n = i, X_(n-1) = i_(n-1), ..., X_0 = i_0] = P[X_(n+1) = j | X_n = i]
We denote the transition probability in a DTMC by P_ij = P[X_(n+1) = j | X_n = i].
A DTMC is a stochastic (aka random) process that satisfies the Markovian property.
(Keyword?) Linear Time-Invariant
CTMC
Definition
N(t): number of messages in the system at time t. CTMC: {N(t), t >= 0}.
Diving into M/M/1
The M/M/1 System
The Kendall Notation: a X/Y/Z queueing system
- X: the distribution of interarrival times for the arrival process
- Y: the distribution of the service time
- Z: the number of servers
- Arrival Statistics (X = M)
- Data arrives according to a Poisson process with rate lambda.
- The interarrival times follow an exponential distribution.
- Service Statistics (Y = M): The data service times follow an exponential distribution with rate mu. With 1. and 2., the number of data items at the next moment depends only on the current number of data items in the system.
Thus, to analyze the probability for the system to have n data items, we can apply the theory of Markov chains.
Applying Theory of Markov Chain
Take the CTMC with delta -> 0. We start from a DTMC and then derive the CTMC. Consider time points
t_k = k · delta, k = 0, 1, 2, ...,
and let
N_k = N(k · delta)
denote the number of messages in the system at moment k · delta. Then,
{N_k} is a DTMC. Then apply the limit delta -> 0 to derive the CTMC.
The Arrival/Service Statistics for M/M/1
- Message arrivals ~ Poisson process (rate lambda)
- Interarrival times ~ exponential distribution (rate lambda)
- tau_n: the interarrival time from the (n-1)-th to the n-th arrival
See [[#Poisson Process]].
Determining : (TODO)
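A small sketch tying the pieces together, using the standard M/M/1 steady-state results rho = lambda/mu, E[N] = rho/(1 − rho), and, via Little’s Law, E[T] = E[N]/lambda = 1/(mu − lambda); the rates below are illustrative:

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics; requires lam < mu (system not overwhelmed)."""
    assert lam < mu, "system must not be overwhelmed"
    rho = lam / mu           # utilization
    EN = rho / (1 - rho)     # expected number of items in the system
    ET = EN / lam            # Little's Law -> mean response time = 1/(mu - lam)
    return rho, EN, ET

rho, EN, ET = mm1_metrics(lam=4.0, mu=5.0)
print(rho, EN, ET)   # 0.8, 4.0, 1.0
```

Note how E[N] blows up as rho approaches 1 — the quantitative version of the “not overwhelmed” assumption from Week 5.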
Week 8: The Aloha System
Initial Story
- Centralized system: multiple senders, one receiver
- RF communication
- Concept of a “time slot”
- Messages received in the same time slot “collide” and are invalid.

One way to prevent collision is TDMA, Time Division Multiple Access.
But here comes the downside:
- What if one specific sender has many messages to send?
- If we have m senders, the average waiting time is about m/2 slots.

Key factors in the efficiency of the Aloha system:
- rate of transmission
- rate of re-transmission
- number of senders
e.g. if the rate of transmission is high, then collisions are more likely, and so re-transmissions become more likely too.

Note that we can apply the queueing model here!

However, because of re-transmissions, the situation is rather complicated. We will break it into two models, describing the worst and the best cases.

We view re-transmissions as separate nodes.
- Worst-case assumption (m -> infinity)
- Best-case assumption (no buffering)
A backlogged node is a node that has experienced a collision and is waiting to re-transmit.
We use a Markov chain here, labeling each state with the number of backlogged nodes.
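For context — this result is not derived in the notes above, but it is the standard slotted-Aloha analysis: if the total attempt rate per slot is G and attempts are Poisson, a slot succeeds only when exactly one transmission occurs, so the throughput is S = G·e^(−G), which peaks at G = 1 with S = 1/e ≈ 0.368. A minimal numerical check:

```python
import math

def throughput(G):
    """Slotted-Aloha throughput: P[exactly one attempt] = G * e^(-G)."""
    return G * math.exp(-G)

# scan a grid of attempt rates to locate the maximum
best_G = max((g / 100 for g in range(1, 301)), key=throughput)
print(best_G, throughput(best_G))   # peak at G = 1.0, S = 1/e ~ 0.368
```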


Week 9: Beyond Queueing Systems
Analyzing M/M/k/k
The CTMC for an M/M/k/k system is a birth-death process:

At one moment there is at most one departure and one arrival.
Here pi_n means the steady-state probability that the system has n items.
\begin{array}{ll} \text{Time-reversibility equation} & \text{Simplified equation}\\ \pi_{0} \lambda = \pi_{1} \mu & \pi_{1} = \frac{\lambda}{\mu} \pi_{0} \\ \pi_{1}\lambda = \pi_{2} \cdot 2 \mu & \pi_{2} = \left( \frac{\lambda}{\mu} \right)^2 \frac{1}{2!} \pi_{0} \\ \dots \\ \pi_{k - 1} \lambda = \pi_{k} \cdot k \mu & \pi_{k} = \left( \frac{\lambda}{\mu} \right) ^{k} \frac{1}{k!} \pi_{0} \end{array}
Here, pi_n · lambda = pi_(n+1) · (n+1) · mu means the “flow” from state n to state n+1 must equal that from state n+1 back to n, which is supported by the fact that at one moment there is at most one arrival and one departure.
- Global Balance Equation -> Balance Equation
- Detailed Balance Equation -> Time Reversibility Equation
With this, together with the normalization sum_(n=0..k) pi_n = 1, we are able to acquire pi_0 = 1 / (sum_(n=0..k) (lambda/mu)^n / n!).
pi_n = ((lambda/mu)^n / n!) · pi_0 for n = 0, 1, ..., k. (Erlang-B Formula)
Blocking probability: P_B = pi_k = ((lambda/mu)^k / k!) / (sum_(n=0..k) (lambda/mu)^n / n!).
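The Erlang-B blocking probability can be evaluated with the standard numerically stable recursion B(0) = 1, B(j) = a·B(j−1) / (j + a·B(j−1)), where a = lambda/mu is the offered load; the values below are illustrative:

```python
def erlang_b(a, k):
    """Erlang-B blocking probability for M/M/k/k with offered load a = lambda/mu."""
    B = 1.0                          # B(0): with zero servers, every arrival is blocked
    for j in range(1, k + 1):
        B = a * B / (j + a * B)      # standard Erlang-B recursion
    return B

print(erlang_b(a=3.0, k=5))   # ~0.110: blocking probability with load 3 and 5 servers
```

The recursion avoids the large factorials and powers of the closed-form ratio, which matters once k grows.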

Stationary Probabilities
(from about page 13:)
Minimum Resource Requirement
We have
k · mu >= lambda,
viewing from total service capacity, and
rho = lambda / (k · mu) <= 1,
viewing from each single server.
Here rho means utilization.
The expected number of busy servers, lambda/mu, is acquired by applying Little's Law to the servers alone.
The probability for an arriving job to be queued, P_Q, can be described as the probability of the event when an arrival finds all k servers busy, i.e. finds at least k jobs in the system. Then we have the Erlang-C formula stating that
P_Q = ((lambda/mu)^k / (k! · (1 - rho))) · pi_0.
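The Erlang-C formula can be written down directly from these definitions, with offered load a = lambda/mu, utilization rho = a/k, and pi_0 obtained from normalization; a sketch with illustrative values:

```python
import math

def erlang_c(a, k):
    """Erlang-C: probability an arriving job must queue in M/M/k (requires rho < 1)."""
    rho = a / k
    assert rho < 1, "need lambda < k * mu"
    top = a**k / (math.factorial(k) * (1 - rho))          # the 'all busy' tail term
    bottom = sum(a**n / math.factorial(n) for n in range(k)) + top  # normalization
    return top / bottom

print(erlang_c(a=3.0, k=5))   # ~0.236: P[arriving job waits]
```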
Week 10: Error Detection
- Cyclic Redundancy Check
- Error Correction Code
S(D): Polynomial for the information
C(D): Polynomial for the parity checks
X(D): Polynomial for the message sent
g(D): a generator polynomial
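A sketch of CRC encoding over GF(2) using these polynomials, representing each polynomial as an integer whose bit i is the coefficient of D^i. The relationship X(D) = S(D)·D^r + C(D), with C(D) the remainder of S(D)·D^r divided by g(D) and r = deg g(D), is standard CRC; the generator below, g(D) = D^3 + D + 1, is an illustrative choice, not one from the course:

```python
def poly_mod(x, g):
    """Remainder of x(D) modulo g(D), with coefficients in GF(2) (XOR arithmetic)."""
    r = g.bit_length() - 1                    # degree of the generator
    for shift in range(x.bit_length() - 1, r - 1, -1):
        if x >> shift & 1:                    # cancel the current leading term
            x ^= g << (shift - r)
    return x                                  # degree < r

g = 0b1011                      # g(D) = D^3 + D + 1 (illustrative generator)
s = 0b11010                     # S(D), the information bits
r = g.bit_length() - 1
c = poly_mod(s << r, g)         # C(D): the parity-check bits
x = (s << r) ^ c                # X(D) = S(D) * D^r + C(D), the message sent
print(poly_mod(x, g))           # 0 -- a valid codeword is divisible by g(D)
```

The receiver re-divides the received polynomial by g(D); any nonzero remainder signals an error.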
Week 11: Tolerating Host Failures
SPoF, Single Point of Failure
e.g.
Strategies
- State Machine Approach, aka Active Replication
- Primary Backup Approach, aka Passive Replication
State Machine Approach
The key point is that clients only want to
- update data on the server
- fetch data from the server. So we want to “mask failures.” Solution: add more servers to the service.


