EE381 - Lab 6: Markov Chains

A machine can be in one of two possible states, A and B. The two states A and B are designated by the numbers 1 and 2, respectively.

The state of the machine at time (k+1) is determined by the state of the machine at the previous time step k : 

•        If the state at k is A, then the state at (k+1) will be A with probability p11

•        If the state at k is A, then the state at (k+1) will be B with probability p12

•        If the state at k is B, then the state at (k+1) will be A with probability p21

•        If the state at k is B, then the state at (k+1) will be B with probability p22

These state transitions are shown schematically in Figure 0.1. Figure 0.2 shows the same information on the state transition probabilities in a more compact form.

 

              Figure 0.1                                                                    Figure 0.2

 

Clearly: p11 + p12 = 1 and p21 + p22 = 1.

 

 For this example the transition probabilities have the following values: 

                                p11 = 0.6 ;  p12 = 0.4 ;  p21 = 0.5 ;  p22 = 0.5

 

The initial state is randomly distributed between state “A” and state “B”. 

The probability of the initial state being “A” is given as: Prob (Initial State = A) = 0.2

The probability of the initial state being “B” is given as: Prob (Initial State = B) = 0.8
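A minimal sketch of a single run of this chain in Python/numpy follows. No code accompanies this excerpt, so the implementation details here are assumptions; only the probability values come from the text above.

import numpy as np

# Two-state chain: indices 0 and 1 stand for states A and B.
P = np.array([[0.6, 0.4],    # row 0: transitions out of A (p11, p12)
              [0.5, 0.5]])   # row 1: transitions out of B (p21, p22)
v0 = np.array([0.2, 0.8])    # Prob(Initial State = A) = 0.2, = B is 0.8

rng = np.random.default_rng()
n = 10                                 # number of time steps
state = rng.choice(2, p=v0)            # draw the initial state
path = [state]
for k in range(n):
    state = rng.choice(2, p=P[state])  # next state drawn from the current row of P
    path.append(state)

print(['AB'[s] for s in path])         # e.g. ['A', 'A', 'B', ...]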

 

The graphs below show several simulation results for this two-state Markov chain. The chain is run for n = 10 time steps.

 

Figure 0.3 shows a single simulation run of the chain. The initial state of the chain is “A”. The graph shows the evolution of the chain for the next 10 steps. In this particular simulation the chain goes through the following sequence of states, starting from the initial state “A”: [A A A A A B B B B A]

 

Figure 0.4 shows another single simulation run of the chain. In this case the initial state of the chain is “B”. The graph shows the evolution of the chain for the next 10 steps. In this particular simulation the chain goes through the following sequence of states, starting from the initial state “B”:

[B B A B A A A A B B]

                                 Figure 0.3                                                                              Figure 0.4
 

             

Figure 0.5 shows the combined results from N = 10,000 simulation runs of the chain. It is seen that the chain starts at state “A” 20% of the time, and at state “B” 80% of the time. The graph shows the evolution of the chain for the next 10 steps.
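A sketch of how the N-run statistics behind such a plot can be gathered (same assumed Python/numpy setting, with the probability values given above):

import numpy as np

# Estimate the state probabilities at each step from N = 10,000 runs.
P = np.array([[0.6, 0.4], [0.5, 0.5]])
v0 = np.array([0.2, 0.8])
rng = np.random.default_rng()
N, n = 10_000, 10

counts = np.zeros((n + 1, 2))          # counts[k, s]: visits to state s at step k
for _ in range(N):
    s = rng.choice(2, p=v0)
    counts[0, s] += 1
    for k in range(1, n + 1):
        s = rng.choice(2, p=P[s])
        counts[k, s] += 1

probs = counts / N                     # column 0: empirical P(A); column 1: P(B)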

 

Figure 0.6 shows the results of the state evolution using the “State Transition Matrix” approach.

The probabilities are CALCULATED based on the transition matrix; they ARE NOT derived as the result of N = 10,000 simulation runs, as was done in Figure 0.5. It is noted that the results of Figures 0.5 and 0.6 are very similar, almost identical, confirming that the state transition matrix approach can be used instead of a step-by-step simulation.
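A sketch of this State Transition Matrix approach: the distribution is propagated exactly as v_{k+1} = v_k P (row-vector convention), with no random runs at all.

import numpy as np

# Exact state probabilities via the transition matrix (Figure 0.6 approach).
P = np.array([[0.6, 0.4], [0.5, 0.5]])
v = np.array([0.2, 0.8])               # initial distribution
history = [v]
for k in range(10):
    v = v @ P                          # one exact propagation step
    history.append(v)
print(np.array(history))               # row k holds [P(A), P(B)] at step k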

 

 

 



 

                            Figure 0.5                                                                               Figure 0.6

                

PROBLEM 1.  A three-state Markov Chain
 

 

Follow the previous code as a guide, and simulate the Markov chain defined in Problem 11.1 of the AMS text by Grinstead & Snell, p. 406 (reference [1]).

 

For this problem use the provided values of the state transition matrix and the initial probability distribution. 

1.     Run each experiment for n = 15 time steps. In order to obtain meaningful statistical data, perform a total of N = 10,000 experiments.

2.     After your experimental simulations are completed, use the State Transition Matrix approach and compare the results. 
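A sketch of these two steps for a three-state chain follows. The transition matrix below is assumed from the Land of Oz example in Section 11.1 of Grinstead & Snell (states R = rain, N = nice, S = snow); substitute the values actually provided with the lab, and note that the uniform initial distribution here is likewise an assumption.

import numpy as np

# Three-state chain: indices 0, 1, 2 stand for states R, N, S.
P = np.array([[0.50, 0.25, 0.25],      # from R (assumed Land of Oz values)
              [0.50, 0.00, 0.50],      # from N
              [0.25, 0.25, 0.50]])     # from S
v0 = np.array([1/3, 1/3, 1/3])         # assumed initial distribution

rng = np.random.default_rng()
n, N = 15, 10_000
counts = np.zeros((n + 1, 3))
for _ in range(N):
    s = rng.choice(3, p=v0)
    counts[0, s] += 1
    for k in range(1, n + 1):
        s = rng.choice(3, p=P[s])
        counts[k, s] += 1
sim_probs = counts / N                 # compare these against the matrix approach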

 



The report should contain:

•        One plot showing one single-simulation run for n = 15 steps, similar to Figure 0.3, but for a three-state chain. Make sure that your plot has the appropriate labels and title.

•        A plot with the simulated probabilities for the states R, N, S, similar to Figure 0.5

•        A plot with the probabilities obtained through the state transition matrix approach for the three states R, N, S, similar to Figure 0.6

•        The code in an Appendix

•        Make sure that all the plots are properly labeled

 

 

PROBLEM 2.  The Google PageRank Algorithm
 

 

This problem presents an introduction to the algorithm used for ranking web pages for searching purposes. The algorithm was developed by Google's founders Page and Brin, together with Motwani and Winograd, and was first published in 1998 (see reference [2]). The PageRank algorithm allowed Google to rise to the top of all web search engines within a matter of months after its implementation, outperforming the established search engines of the time such as AltaVista and Yahoo. The algorithm is based on the theory of Markov chains, utilizing the information on the number of links leading to an existing webpage.

 

The current problem uses a simplified web in order to show how the algorithm works (see reference [3]). The simplified web consists of 5 pages only, and it is shown schematically in Figure 2.1.

 



 

Figure 2.1: A five-page web
 

A Markov chain model of an impartial surfer visiting web pages is created as follows:

•        At time k the surfer is on page X, where X ∈ {A, B, C, D, E}.

•        At time (k+1) the surfer will move randomly to another page Y, which must be reachable from X (i.e. page X has a link to Y).

•        The state Sk of the Markov chain at time k is defined as the page which the surfer is visiting at time k: Sk ∈ {A, B, C, D, E}.

•        To create the Markov chain model, the algorithm assumes that all pages that can be reached from X have the same probability of being visited by the surfer. Thus:

 

                                                                               1



(Total number of links leaving page X)



Prob (Sk+1 =Y S| k =X)=

                                                                              0
,

,
if  can be reached from Y   X

 

if  cannot be reached from Y  X




•        For example: Prob (Sk+1 = A | Sk = C) follows directly from the formula above, while Prob (Sk+1 = E | Sk = D) = 0; etc.

 

•        Based on these probabilities the State Transition Matrix (P) of the Markov chain is constructed. The left eigenvector w of P corresponding to the eigenvalue λ = 1 (which is a fixed vector of the Markov chain) is computed from: w P = w (a computational sketch follows this list).

•        The PageRank algorithm uses the vector w to rank the pages of the five-page web in terms of visiting importance.
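As a sketch of the fixed-vector computation: the link structure below is a hypothetical placeholder, since Figure 2.1 is not reproduced here; build P from the actual links in the figure.

import numpy as np

# Left eigenvector w of P for eigenvalue 1, i.e. the fixed vector with w P = w.
# Equivalently, w is an ordinary eigenvector of P.T for eigenvalue 1.
P = np.array([[0,   1/2, 1/2, 0,   0  ],   # hypothetical links out of A
              [1/3, 0,   1/3, 1/3, 0  ],   # hypothetical links out of B
              [0,   0,   0,   1/2, 1/2],   # hypothetical links out of C
              [1,   0,   0,   0,   0  ],   # hypothetical links out of D
              [0,   1/2, 0,   1/2, 0  ]])  # hypothetical links out of E

vals, vecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(vals - 1.0))      # locate the eigenvalue closest to 1
w = np.real(vecs[:, i])
w = w / w.sum()                        # normalize so the entries sum to 1
print(dict(zip('ABCDE', np.round(w, 4))))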

 

To complete this problem follow the steps below:

1.     Create the 5×5 State Transition Matrix P with the values of the transition probabilities.

2.     Assume that initially all pages are equally likely to be visited, i.e. use the initial state probability distribution vector: v1 = [A B C D E] = [1/5  1/5  1/5  1/5  1/5]

3.     Calculate the probability vectors for n = 1, 2, …, 20 steps, using the State Transition Matrix only. Note that you ARE NOT asked to run the complete simulation of the chain. You are only asked to use the State Transition Matrix approach to calculate the probability vectors for n steps.

4.     Rank the pages {A, B, C, D, E} in order of importance, based on the results from the previous step.

5.     Create a plot of the calculated state probabilities for each of the five states {A, B, C, D, E} vs. the number of steps for n = 1, 2, …, 20. This should be similar to the plot in Figure 0.6.

6.     Assume that the surfer always starts at his/her home page, which for the purposes of this problem is page E. Repeat steps (2)-(5) with the new state probability distribution vector: v2 = [A B C D E] = [0 0 0 0 1] (see the sketch below).
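A sketch of Steps 2-4 (propagate the distribution, then rank). The matrix P is the same hypothetical placeholder as in the earlier sketch and must be replaced by the matrix built from Figure 2.1; replacing v with [0, 0, 0, 0, 1] repeats the computation for v2 (Step 6).

import numpy as np

# Probability vectors via the State Transition Matrix only: v_{k+1} = v_k P.
P = np.array([[0,   1/2, 1/2, 0,   0  ],   # hypothetical placeholder matrix
              [1/3, 0,   1/3, 1/3, 0  ],
              [0,   0,   0,   1/2, 1/2],
              [1,   0,   0,   0,   0  ],
              [0,   1/2, 0,   1/2, 0  ]])
v = np.full(5, 1/5)                    # v1: all pages equally likely
history = [v]
for _ in range(20):                    # n = 1, 2, ..., 20
    v = v @ P
    history.append(v)                  # history[k]: distribution after k steps

ranking = sorted(zip('ABCDE', v), key=lambda t: -t[1])
for rank, (page, prob) in enumerate(ranking, start=1):
    print(rank, page, round(prob, 4))  # rows of Table 1: Rank, Page, Probability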

 
 The report should contain:

•        The 5×5 State Transition Matrix P

•        For the initial state probability distribution vector v1:

o   Submit the page ranking calculated in Step 4. Use Table 1 for your answer. Note: You will need to replicate the table in your Word file, in order to provide the answer in your report. Points will be taken off if you do not use the table.

o   Submit the plot created in Step 5.

•        For the initial state probability distribution vector v2:

o   Submit the page ranking calculated in Step 4. Use Table 1 for your answer. Note: You will need to replicate the table in your Word file, in order to provide the answer in your report. Points will be taken off if you do not use the table.

o   Submit the plot created in Step 5.

•        The code in an Appendix.

•        Make sure the plots are properly labeled

  

       

Initial probability vector: v1

Rank     Page     Probability
1
2
3
4
5

Initial probability vector: v2

Rank     Page     Probability
1
2
3
4
5

Table 1.

 

References

 

[1]   "Introduction to Probability", C.M. Grinstead and J.L Snell, American Mathematical Society, electronic version, 2003, 2nd edition

[2]   L. Page, S. Brin, R. Motwani, and T. Winograd.  "The PageRank Citation Ranking:

Bringing Order to the Web", Technical Report, Stanford University, 1998.  

[3]   "Mathematics & Technology", C. Rousseau & Y. Saint-Aubin, Springer, 2008. 

             


 



0.    Introduction and Background Material
 

0.1. A simple example: Drunkard's Walk

 

In this project you are required to simulate a Markov chain based on the “Drunkard's Walk”. An example of the chain is shown in Figure 0.1 below, as given in Ch. 11 of the textbook by Grinstead & Snell.

Figure 0.2 shows a single-run simulation of the Drunkard's Walk. In the figure the walk starts at state 2 and reaches state 0 at step 8. Once it reaches state 0 it stays there, since state 0 is an absorbing state.

Figure 0.3 shows a single-run simulation in which the chain starts at state 3 and is eventually absorbed by state 4, the other absorbing state of the system.



 

                              Figure 0.2                                                                               Figure 0.3

             

 

PROBLEM 3.  Simulate a five-state absorbing Markov chain 
 

Simulate the Markov chain defined by the probabilities shown in Figure 1.1 below.

 



 

Figure 1.1
 

1.     Define the State Transition Matrix P.

2.     Create a single-run simulation of the chain for n = 15 time steps. The chain can start at a random transient state, but it should get absorbed by state 0. Provide the results in a figure similar to Figure 0.2. Your figure must be properly labeled.

3.     Create a single-run simulation of the chain for n = 15 time steps. The chain can start at a random transient state, but it should get absorbed by state 4. Provide the results in a figure similar to Figure 0.3. Your figure must be properly labeled.
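A sketch of such a single run. The transition probabilities below are assumed to be the standard drunkard's-walk values (1/2 to each neighbor from the transient states); replace them with the probabilities given in Figure 1.1.

import numpy as np

# Five-state absorbing chain: states 0 and 4 are absorbing.
P = np.array([[1.0, 0,   0,   0,   0  ],   # state 0: absorbing
              [0.5, 0,   0.5, 0,   0  ],   # assumed drunkard's-walk rows
              [0,   0.5, 0,   0.5, 0  ],
              [0,   0,   0.5, 0,   0.5],
              [0,   0,   0,   0,   1.0]])  # state 4: absorbing

rng = np.random.default_rng()
state = rng.choice([1, 2, 3])          # random transient starting state
path = [state]
for k in range(15):                    # n = 15 time steps
    state = rng.choice(5, p=P[state])
    path.append(state)                 # stays put once absorbed at 0 or 4
print(path)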

 

PROBLEM 4.  Compute the probability of absorption using the simulated chain
 

1.     Run the single-run simulation of the chain N = 10,000 times, using the following probabilities for the initial state: [0.0 0.0 1.0 0.0 0.0], i.e. the initial state is always state 2.

2.     After the 10,000 runs are completed, count how many times the chain ended at state 0 and how many times it ended at state 4, given that the starting state is state 2. Based on these counts, compute the probabilities of absorption b20 and b24.

3.     Use the format of Table 1 below to report the probabilities of absorption in your report. Points will be taken off if the given format is not used.
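A sketch of this estimation, using the same assumed drunkard's-walk values as before; each run is continued until absorption so that every one of the N runs ends at state 0 or state 4.

import numpy as np

# Estimate the absorption probabilities b20 and b24 starting from state 2.
P = np.array([[1.0, 0,   0,   0,   0  ],   # assumed drunkard's-walk matrix
              [0.5, 0,   0.5, 0,   0  ],
              [0,   0.5, 0,   0.5, 0  ],
              [0,   0,   0.5, 0,   0.5],
              [0,   0,   0,   0,   1.0]])
rng = np.random.default_rng()
N, absorbed = 10_000, {0: 0, 4: 0}
for _ in range(N):
    s = 2                              # initial distribution [0 0 1 0 0]
    while s not in (0, 4):             # run until the chain is absorbed
        s = rng.choice(5, p=P[s])
    absorbed[s] += 1
b20, b24 = absorbed[0] / N, absorbed[4] / N
print(b20, b24)                        # for the symmetric walk both are near 0.5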

 

Absorption probabilities (via simulations)
b20 =
b24 =

 

Table 1.

 
