CS 510: Advanced Information Retrieval (Fall 2017)

Instructor: ChengXiang Zhai

Assignment #3: N-Gram Language Model

Notice: This assignment is due Thursday, September 21st at 11:59pm.

Please submit your solutions via Compass. You should submit your assignment as a PDF.

This assignment tests your understanding of N-Gram Language Models. In particular, you will work on specific problems related to the estimation of N-Gram Language Model parameters, the issues involved in the estimation process, and ways to overcome those issues. Specifically, you will deal with different kinds of smoothing techniques, including how smoothing works in practice and what the commonalities and differences of the various smoothing techniques are.

1. N-Gram Language Model [15 pts]

  1. [5 pts] What is the central assumption regarding word dependencies that is made in N-Gram Language Models?

  2. [5 pts] Do you think the assumption made in N-Gram Language Models is reasonable? Why?

  3. [5 pts] What is the primary benefit of applying the assumption made in N-Gram Language Models?

2. Unigram Language Model [15 pts]

The Unigram Language Model is a special class of N-Gram Language Model in which the next word in the document is assumed to be independent of the previous words generated by the model. Mathematically, this is written as $P(w_m|w_{m-1},\dots,w_1)=P(w_m)$.

  1. [10 pts] We can estimate the parameters of a Unigram Language Model through Maximum Likelihood Estimation. Given a particular document $d$ and the vocabulary set $V$, the maximum likelihood estimator for a Unigram Language Model is given by the following formula:
    $\displaystyle P_{ML}(w|d)=\frac{c(w,d)}{\sum_{w'\in V} c(w',d)}$
    Now, let us assume that we have seen the following document $d$: “the sun rises in the east and sets in the west”. Assuming that this document was generated by a Unigram Language Model and that the words in the document $d$ constitute the entire vocabulary, how many parameters are necessary to specify the Unigram Language Model? Estimate the values of all these parameters using the maximum likelihood estimator (a small counting sketch follows this question set).
  2. [5 pts] Now assume that the entire vocabulary $V$ consists of the set:
    $\{a, the, from, retrieval, sun, rises, in, BM25, east, sets, and, west\}$

    If we consider the same document $d$: “the sun rises in the east and sets in the west” and assume again that this document was generated by a Unigram Language Model, how many parameters are necessary to specify the Unigram Language Model in this case? Estimate the values of all these parameters using the maximum likelihood estimator.
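
The following is a minimal Python sketch (not part of the required written solution) of the counting behind the maximum likelihood estimator used in questions 2(a) and 2(b); the document string and the example words are taken from those questions.

```python
from collections import Counter

# Toy document from questions 2(a)/2(b); tokens are obtained by splitting on whitespace.
doc = "the sun rises in the east and sets in the west".split()

# Maximum likelihood estimate for a unigram language model:
#   P_ML(w|d) = c(w, d) / sum_{w' in V} c(w', d)
counts = Counter(doc)
total = sum(counts.values())  # equals the document length |d|

def p_ml(w):
    """MLE unigram probability of word w given the document d."""
    return counts[w] / total

# Words that never occur in d (e.g. "BM25" from the vocabulary in 2(b))
# get count 0 and therefore probability 0 under the MLE.
for w in ["the", "sun", "BM25"]:
    print(w, p_ml(w))
```

Note that the denominator depends only on the document, so the estimates sum to one over the words that actually occur in $d$.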

3. Bigram Language Model [15 pts]

The Bigram Language Model is another special class of N-Gram Language Model in which the next word in the document depends only on the immediately preceding word. Mathematically, this is written as the conditional probability $P(w_m|w_{m-1},\dots,w_1)=P(w_m|w_{m-1})$.

  1. [8 pts] Given the same document $d$ from question 2(a) and the same vocabulary set $V$ from question 2(b), and assuming the document $d$ is now generated by a Bigram Language Model, how many parameters are necessary to specify the model? Using the maximum likelihood estimator, estimate the values of the following parameters (assume # to be the start-of-sentence marker; a counting sketch follows this section):

    • $P(sun|the)$

    • $P(the|in)$

    • $P(from|the)$

    • $P(BM25|retrieval)$

    • $P(east|west)$

    • $P(east|rises)$

    • $P(sets|\#)$

    • $P(the|\#)$

  2. [7 pts] Please provide answers to the following questions:

    • Do you see any general problem associated with the estimation of the parameters of the Bigram Language Model from problem 3(a)?

    • Do you see the same problem in the estimation process for question 2(b)?

    • For which model is the problem more severe? Can you derive a general conclusion based on this comparison?

    • What can we do to solve this general problem?
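
As referenced in question 3(a), the following is a minimal Python sketch (not part of the required written solution) of bigram maximum likelihood counting for the same toy document, assuming the start-of-sentence marker #; returning 0 for an unseen history is a convention adopted here only for illustration, since the MLE is otherwise undefined.

```python
from collections import Counter

# Toy document from question 2(a), with "#" prepended as the start-of-sentence
# marker assumed in question 3(a).
doc = ["#"] + "the sun rises in the east and sets in the west".split()

# Bigram MLE: P_ML(w_m|w_{m-1}) = c(w_{m-1} w_m, d) / c(w_{m-1} as a history, d)
bigram_counts = Counter(zip(doc, doc[1:]))
history_counts = Counter(doc[:-1])  # every token except the last acts as a history

def p_ml(w, prev):
    """MLE bigram probability of word w given the preceding word prev."""
    if history_counts[prev] == 0:
        return 0.0  # prev never occurs as a history; the MLE is undefined, use 0 by convention
    return bigram_counts[(prev, w)] / history_counts[prev]

# One seen bigram ("the sun") and one unseen bigram ("the from"):
print(p_ml("sun", "the"), p_ml("from", "the"))
```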

4. Smoothing [15 pts]

  1. [10 pts] Write down the formula for Dirichlet Prior Smoothing. Then, mathematically prove the following two lemmas (a reference sketch of both smoothing formulas is given after this section):

    • Show, in the limit where document length tends to infinity, that a unigram language model smoothed with a Dirichlet prior becomes equivalent to one estimated with the maximum likelihood estimator.

    • Show, in the limit where the parameter $\mu$ tends to infinity, that a unigram language model smoothed with a Dirichlet prior becomes equivalent to the background language model used in the smoothing.

  2. [5 pts] Point out one advantage of Jelinek-Mercer smoothing over Katz-Backoff smoothing. Explain why.
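
The sketch below, referenced in question 4(a), implements the standard interpolation forms of the two smoothing methods discussed in this assignment, with the background model $P(w|REF)$ passed in as a function. It is meant only as a reference for experimenting with the questions in Section 5, not as the required derivation.

```python
from collections import Counter

def dirichlet_smoothed(w, doc_tokens, p_ref, mu):
    """Dirichlet prior smoothing:
       p(w|d) = (c(w,d) + mu * p(w|REF)) / (|d| + mu)"""
    counts = Counter(doc_tokens)
    return (counts[w] + mu * p_ref(w)) / (len(doc_tokens) + mu)

def jelinek_mercer_smoothed(w, doc_tokens, p_ref, lam):
    """Jelinek-Mercer smoothing:
       p(w|d) = (1 - lambda) * p_ML(w|d) + lambda * p(w|REF)"""
    counts = Counter(doc_tokens)
    p_ml = counts[w] / len(doc_tokens)
    return (1 - lam) * p_ml + lam * p_ref(w)
```

Both methods interpolate the document counts with the background model; the two limiting behaviors asked about in 4(a) can be read off directly from the Dirichlet formula.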

5. Application of Smoothing [40 pts]

Again, consider the document $d$: “the sun rises in the east and sets in the west”. This time, assume that we have a background word distribution (pre-computed somehow), denoted by $REF$, which is characterized as follows:

  1. [10 pts] Assume document $d$ is generated by a Unigram Language Model. Estimate the parameters of the Unigram Language Model using Dirichlet Prior Smoothing, assuming $\mu=4$. Now, compare this result against the results obtained from 2(b) (a small computational sketch follows these questions).

  2. [10 pts] Repeat problem 5(a) assuming $\mu=0.01$ and $\mu=100$. Compare these results with the results from problem 5(a). Do the results match your intuition? Explain why.

  3. [20 pts] Repeat problem 5(a) with Jelinek-Mercer smoothing instead of Dirichlet Prior Smoothing, assuming $\lambda \in \{0.01, 0.5, 0.9\}$, and compare the results obtained for the different $\lambda$'s. Also, compare these results with the results from problems 5(a) and 5(b). What similarities or differences do you observe?
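
The following usage sketch applies the dirichlet_smoothed and jelinek_mercer_smoothed helpers from the end of Section 4 to the document $d$. Because the actual $REF$ distribution is the one specified in the assignment, the uniform background used below is only a hypothetical placeholder for illustration.

```python
doc = "the sun rises in the east and sets in the west".split()
vocab = ["a", "the", "from", "retrieval", "sun", "rises",
         "in", "BM25", "east", "sets", "and", "west"]

def p_ref(w):
    # Hypothetical placeholder: uniform background model over the vocabulary.
    # Replace with the REF distribution given in the assignment.
    return 1.0 / len(vocab)

# Dirichlet prior smoothing for several values of mu (questions 5(a) and 5(b)).
for mu in [0.01, 4, 100]:
    probs = {w: round(dirichlet_smoothed(w, doc, p_ref, mu), 3) for w in ["the", "BM25"]}
    print("mu =", mu, probs)

# Jelinek-Mercer smoothing for several values of lambda (question 5(c)).
for lam in [0.01, 0.5, 0.9]:
    probs = {w: round(jelinek_mercer_smoothed(w, doc, p_ref, lam), 3) for w in ["the", "BM25"]}
    print("lambda =", lam, probs)
```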