
Comments (14)

cbravo93 commented on September 28, 2024

I am working with a synthetic data set here. It comes from biological data; the idea is that we sample 20 'documents' from each of 5 very long documents, ending up with 100 documents. Of the 5 documents, 2 are of one type and the other 3 are of another. We expect a topic representing each document of origin, at least two topics representing the document types, and at least one generally shared topic; so around ~10 topics in total. A quick check is to use the doc_topic distributions for clustering and dimensionality reduction (e.g. UMAP). This works quite nicely with the 10-topic model.

[Figure: UMAP of the doc-topic distributions for the 10-topic model]
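
In case it's useful, that sanity check looks roughly like this (a sketch; it assumes a fitted `lda` model plus the `umap-learn` and `scikit-learn` packages, and `X` is the documents-by-terms count matrix):

    # Embed the doc-topic distributions with UMAP and cluster them;
    # documents should group by their document of origin.
    import lda
    import umap  # umap-learn
    from sklearn.cluster import KMeans

    model = lda.LDA(n_topics=10, n_iter=1500, random_state=1)
    model.fit(X)

    doc_topic = model.doc_topic_  # shape (n_docs, n_topics), rows sum to 1
    clusters = KMeans(n_clusters=10, random_state=1).fit_predict(doc_topic)
    embedding = umap.UMAP(random_state=1).fit_transform(doc_topic)
    # plot `embedding` coloured by `clusters` or by document of origin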

Other metrics also point to ~10 topics as optimal. For example, in the plot below you can see Cao_Juan and Arun from tmtoolkit, and the inverted log-likelihood with the new formula (so all should be minimized).

[Figure: Cao_Juan, Arun, and inverted log-likelihood versus number of topics]

I have also used 'real' data and the results are always the same: the R package's log-likelihood increases with the number of topics and peaks around the expected value (and the topics make a lot of biological sense), while the current formula here always decreases (not inverted here, so the maximum should be the 'best'):

[Figure: log-likelihood versus number of topics for the R package and this package on real data]

I did not look deeply into the difference between the two code bases, but there must be something. When comparing models with the same settings, results from different model selection criteria are quite similar between both implementations, and the behaviour of the log-likelihood over iterations looks correct (increasing). Maybe some constant is subtracted more times as the number of topics grows? I will take a deeper look!

(Also, I am happy to share the synthetic data if it would be useful; the more tests the better :))

internaut commented on September 28, 2024

You should probably compare how the log-likelihood (LL) is calculated in the R package and in this package (see https://github.com/lda-project/lda/blob/develop/lda/_lda.pyx#L81).

Btw, if you want to find out the "optimal" number of topics, you probably shouldn't (only) rely on the LL. There are several other metrics which may provide better results in "real world" datasets. See for example: https://tmtoolkit.readthedocs.io/en/latest/topic_modeling.html#Evaluation-of-topic-models
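
For reference, the evaluation loop in the linked tmtoolkit docs looks roughly like this; a sketch only, since function and metric names follow the linked documentation and may differ between tmtoolkit versions (`dtm` is the document-term matrix):

    # Sketch of evaluating a grid of topic numbers with tmtoolkit,
    # following the linked docs; names may differ between versions.
    from tmtoolkit.topicmod.tm_lda import evaluate_topic_models
    from tmtoolkit.topicmod.evaluate import results_by_parameter
    from tmtoolkit.topicmod.visualize import plot_eval_results

    var_params = [{'n_topics': k} for k in range(5, 51, 5)]
    const_params = {'n_iter': 1000, 'random_state': 1}

    eval_results = evaluate_topic_models(dtm,
                                         varying_parameters=var_params,
                                         constant_parameters=const_params)
    results = results_by_parameter(eval_results, 'n_topics')
    plot_eval_results(results)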

PS: Shouldn't the LL be monotonically increasing with the number of iterations? It's a bit odd that in the R package the LL decreases after reaching a peak early on.

cbravo93 commented on September 28, 2024

Thanks for your reply! Using Arun_2010, Cao_Juan_2009 and Mimno_2011 as metrics (as named in tmtoolkit), I also get 10 topics as the best, as I expected. Regarding the LL, with other implementations I normally see a maximum around the optimal number of topics, after which it decreases (as in Griffiths & Steyvers, https://www.pnas.org/content/pnas/101/suppl_1/5228.full.pdf, for example), rather than a monotonic decrease with the number of topics. I will look deeper into the code to see if I can figure out why it behaves differently here.

[Figure: log-likelihood versus number of topics, peaking around the optimum]

internaut commented on September 28, 2024

I think you misread my "PS" note. I said "monotonically increasing with the number of iterations", not number of topics! This is what I found odd in your second picture.

cbravo93 commented on September 28, 2024

Hi!

Sorry if I wasn't clear! I didn't comment on that; it is from the other package and I have no explanation for it, but from Fig. 2 in Griffiths & Steyvers it seems that it can happen. It doesn't happen in all cases either; here is another example on a real data set:

[Figure: log-likelihood over iterations on a real data set]

Regarding this package, what I find weird is that the likelihood decreases with the number of topics. I also found a related issue on Google Groups (https://groups.google.com/forum/#!topic/lda-users/P7sYtsj8b6E), but it does not look like a parameter/data issue to me.

internaut commented on September 28, 2024

Thanks for clearing this up!

I'd be interested to hear if you find a difference in the LL estimation code that may explain this behavior.

cbravo93 commented on September 28, 2024

Hi!

I found the code they use for the LL (https://github.com/slycoder/R-lda/blob/master/src/gibbs.c):

    double const_prior = 0;
    double const_ll = 0;

    if (compute_log_likelihood) {
      //                log B(\alpha)
      const_prior = (K * lgammafn(alpha) - lgammafn(alpha * K)) * nd;
      //                log B(\eta)
      const_ll = (V * lgammafn(eta) - lgammafn(eta * V)) * K;
    } 

    /* ... more code ... */

    /* Compute the likelihoods: */
    if (compute_log_likelihood) {
      double doc_ll = 0;
      for (dd = 0; dd < nd; ++dd) {
        double sum = alpha * K;
        for (kk = 0; kk < K; ++kk) {
          doc_ll += lgammafn(INTEGER(document_sums)[K * dd + kk] + alpha);
          sum += INTEGER(document_sums)[K * dd + kk];
        }
        doc_ll -= lgammafn(sum);
      }
      double topic_ll = 0;
      for (kk = 0; kk < K; ++kk) {
        double sum = eta * V;
        for (ii = 0; ii < V; ++ii) {
          topic_ll += lgammafn(INTEGER(topics)[kk + K * ii] + eta);
          sum += INTEGER(topics)[kk + K * ii];
        }
        topic_ll -= lgammafn(sum);
      }
      if (trace >= 2) {
        REprintf("ll: %g + %g - %g - %g = %g\n", doc_ll, topic_ll, const_ll, const_prior,
            doc_ll + topic_ll - const_ll - const_prior);
      }
      REAL(log_likelihood)[2 * iteration] = doc_ll - const_prior + topic_ll - const_ll;
      REAL(log_likelihood)[2 * iteration + 1] = topic_ll - const_ll;
    }
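
For reference, this matches the joint likelihood from Griffiths & Steyvers: the state of the sampler is scored as log P(w|z) + log P(z), where both terms are Dirichlet-multinomial marginals. In LaTeX notation (n_k^{(w)} is the number of tokens of word w assigned to topic k, n_k its row total; n_d^{(k)} is the number of tokens in document d assigned to topic k, n_d the document length):

    \log P(\mathbf{w} \mid \mathbf{z}) = K\,[\log\Gamma(V\eta) - V\log\Gamma(\eta)]
        + \sum_{k=1}^{K} \Big[ \sum_{w=1}^{V} \log\Gamma(n_k^{(w)} + \eta) - \log\Gamma(n_k + V\eta) \Big]

    \log P(\mathbf{z}) = D\,[\log\Gamma(K\alpha) - K\log\Gamma(\alpha)]
        + \sum_{d=1}^{D} \Big[ \sum_{k=1}^{K} \log\Gamma(n_d^{(k)} + \alpha) - \log\Gamma(n_d + K\alpha) \Big]

In the code above, `topic_ll - const_ll` and `doc_ll - const_prior` are exactly these two terms.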
 

cbravo93 commented on September 28, 2024

My attempt to convert this to Cython (not tested):

    from libc.math cimport lgamma

    cpdef double _loglikelihood(int[:, :] nzw, int[:, :] ndz, double alpha, double eta) nogil:
        cdef int k, d, w
        cdef int D = ndz.shape[0]
        cdef int n_topics = ndz.shape[1]
        cdef int vocab_size = nzw.shape[1]

        cdef double topic_ll = 0
        cdef double doc_ll = 0
        cdef double total

        cdef double const_prior = (n_topics * lgamma(alpha) - lgamma(alpha * n_topics)) * D
        cdef double const_ll = (vocab_size * lgamma(eta) - lgamma(eta * vocab_size)) * n_topics

        # calculate log p(w|z)
        for k in range(n_topics):
            total = eta * vocab_size
            for w in range(vocab_size):
                # if nzw[k, w] == 0, addition and subtraction cancel out
                if nzw[k, w] > 0:
                    topic_ll += lgamma(nzw[k, w] + eta)
                    total += nzw[k, w]
            topic_ll -= lgamma(total)

        # calculate log p(z)
        for d in range(D):
            total = alpha * n_topics
            for k in range(n_topics):
                if ndz[d, k] > 0:
                    doc_ll += lgamma(ndz[d, k] + alpha)
                    total += ndz[d, k]
            doc_ll -= lgamma(total)

        return doc_ll - const_prior + topic_ll - const_ll

cbravo93 commented on September 28, 2024

I have done a quick test calculating the log-likelihood in the last iteration with this formula, and I get the expected results (y: log-likelihood in the last iteration, x: number of topics):

[Figure: last-iteration log-likelihood versus number of topics, computed with the R formula]

For this I adapted the function to Python:

    import math

    def loglikelihood(nzw, ndz, alpha, eta):
        # nzw: topics x vocab count matrix; ndz: docs x topics count matrix
        D = ndz.shape[0]
        n_topics = ndz.shape[1]
        vocab_size = nzw.shape[1]

        const_prior = (n_topics * math.lgamma(alpha) - math.lgamma(alpha * n_topics)) * D
        const_ll = (vocab_size * math.lgamma(eta) - math.lgamma(eta * vocab_size)) * n_topics

        # calculate log p(w|z)
        topic_ll = 0
        for k in range(n_topics):
            total = eta * vocab_size
            for w in range(vocab_size):
                # if nzw[k, w] == 0, addition and subtraction cancel out
                if nzw[k, w] > 0:
                    topic_ll += math.lgamma(nzw[k, w] + eta)
                    total += nzw[k, w]
            topic_ll -= math.lgamma(total)

        # calculate log p(z)
        doc_ll = 0
        for d in range(D):
            total = alpha * n_topics
            for k in range(n_topics):
                if ndz[d, k] > 0:
                    doc_ll += math.lgamma(ndz[d, k] + alpha)
                    total += ndz[d, k]
            doc_ll -= math.lgamma(total)

        return doc_ll - const_prior + topic_ll - const_ll
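
For completeness, a hypothetical call against a fitted model from this package could look like this (assuming the fitted model still exposes the count matrices `nzw_` and `ndz_`, which are set during initialization; check your version):

    # Hypothetical usage of the function above; nzw_ (topics x vocab) and
    # ndz_ (docs x topics) are assumed to remain on the fitted model, and
    # X is the documents-by-terms count matrix.
    import lda

    model = lda.LDA(n_topics=10, n_iter=1500, random_state=1)
    model.fit(X)
    ll = loglikelihood(model.nzw_, model.ndz_, model.alpha, model.eta)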

Same plot, but using the last-iteration log-likelihood values as calculated in this package with the current formula. Here the likelihood decreases with the number of topics:

[Figure: last-iteration log-likelihood versus number of topics, computed with this package's current formula]

I have not been able to spot where the difference is (I used exactly the same models for both plots, as computed by this package), apart from the fact that they calculate nd[d] within the function instead of passing it as an argument.

Let me know if you are able to spot the difference and decide which approach is correct, and whether you plan to update the log-likelihood calculation in this package.

Thanks for looking into it!

C

internaut commented on September 28, 2024

Thanks, this looks promising! However, I'm not the author of the lda package, so if you want to get this integrated into lda you should open a PR, and maybe @ariddell will accept the changes. Unfortunately, it seems to me that development of this package has stalled since the end of last year. I therefore created a fork at https://github.com/WZBSocialScienceCenter/lda (mainly because I needed Python 3.8 binary wheels); you may also open a PR there. I haven't had a chance to review the code yet. Anyway, I think it should be converted to Cython, but I could also do that. I'll be on vacation soon, but at the start of September I could have a look at it.

ariddell commented on September 28, 2024

Comparing loglikelihoods from models using different numbers of topics is a bit tricky because the models are very different (each additional topic adds a huge number of parameters). It's less tricky if you are calculating the probability of held-out data.

There could, of course, also be some sort of typo in my code. This sort of thing happens all the time.

re: wheels, it is my intention to release new wheels for new versions of Python. I'll try to do this soon.

cbravo93 commented on September 28, 2024

I have submitted the PR adding the function in Cython; it works properly. @internaut I can also submit it to your repo if you want it there as well.

Thanks both :)!

ariddell commented on September 28, 2024

> I have not been able to spot where the difference is

I'm a little concerned by this. How do we know which is correct?

The log-likelihood on a synthetic dataset should be easy to verify by hand. If there's no test which does this, one should be added. I think it makes sense to start there. How does that sound?
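
Something along these lines could be a starting point (a sketch, not a test from the package; `loglikelihood` is the Python port above, and the reference value is computed directly from the Dirichlet-multinomial marginals):

    # Hand-verifiable check: for tiny count matrices, compute
    # log p(w|z) + log p(z) directly from the Dirichlet-multinomial
    # marginals and compare with loglikelihood() above. The multinomial
    # coefficients are omitted in both, as in the Gibbs sampler formula.
    import numpy as np
    from scipy.special import gammaln

    def dm_marginal(counts, prior):
        # log Dirichlet-multinomial marginal of one row of counts under
        # a symmetric Dirichlet prior (without the multinomial coefficient)
        k = counts.size
        return (gammaln(k * prior) - k * gammaln(prior)
                + gammaln(counts + prior).sum()
                - gammaln(counts.sum() + k * prior))

    alpha, eta = 0.1, 0.01
    nzw = np.array([[3, 0, 1], [0, 2, 2]])  # topics x vocab counts
    ndz = np.array([[2, 1], [1, 3]])        # docs x topics counts

    expected = (sum(dm_marginal(row, eta) for row in nzw)
                + sum(dm_marginal(row, alpha) for row in ndz))
    assert np.isclose(loglikelihood(nzw, ndz, alpha, eta), expected)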

stale commented on September 28, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
