sebischair / lbl2vec

Lbl2Vec learns jointly embedded label, document and word vectors to retrieve documents with predefined topics from an unlabeled document corpus.

Home Page: https://wwwmatthes.in.tum.de/pages/naimi84squl1/Lbl2Vec-An-Embedding-based-Approach-for-Unsupervised-Document-Retrieval-on-Predefined-Topics

License: BSD 3-Clause "New" or "Revised" License

Python 100.00%
natural-language-processing word-embeddings document-embeddings label-embeddings unsupervised-classification nlp machine-learning text-classification python unsupervised-document-retrieval

lbl2vec's Introduction


Lbl2Vec

Lbl2Vec is an algorithm for unsupervised document classification and unsupervised document retrieval. It automatically generates jointly embedded label, document and word vectors and returns documents of topics modeled by manually predefined keywords. This package includes two different model types: the plain Lbl2Vec model uses Doc2Vec, whereas Lbl2TransformerVec uses transformer-based language models to create the embeddings. Once a model is trained, you can:

  • Classify documents as related to one of the predefined topics.
  • Get similarity scores for documents to each predefined topic.
  • Get the most similar predefined topic for each document.

See the papers presenting Lbl2Vec [1,2] and Lbl2TransformerVec [3] for more details on how it works.

A corresponding Medium post describing the use of Lbl2Vec for unsupervised text classification can be found here.

A Medium post evaluating Lbl2Vec and Lbl2TransformerVec against zero-shot approaches can be found here.

Benefits

  1. No need to label the whole document dataset for classification.
  2. No stop word lists required.
  3. No need for stemming/lemmatization.
  4. Works on short text.
  5. Creates jointly embedded label, document, and word vectors.

Table of Contents

  1. How does it work?
  2. Installation
  3. Usage
    1. Model Training
      1. Lbl2Vec
      2. Lbl2TransformerVec
    2. Document prediction
    3. Save and load models
  4. Citation information

How does it work?

Back to Table of Contents

The key idea of the algorithm is that many semantically similar keywords can represent a topic. In the first step, the algorithm creates a joint embedding of document and word vectors. Once documents and words are embedded in a vector space, the goal of the algorithm is to learn label vectors from previously manually defined keywords representing a topic. Finally, the algorithm can predict the affiliation of documents to topics from document vector <-> label vector similarities.

The Algorithm

0. Use the manually defined keywords for each topic of interest.

Domain knowledge is needed to define keywords that describe topics and are semantically similar to each other within the topics.

Basketball    Soccer    Baseball
NBA           FIFA      MLB
Basketball    Soccer    Baseball
LeBron        Messi     Ruth
...           ...       ...
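In code, a keyword table like the one above maps to the keywords_list parameter used throughout this README: a plain list with one keyword list per topic (the topic names in the comments are only for orientation):

```python
# one list of descriptive keywords per topic; the keywords within a topic
# should be semantically similar to each other
descriptive_keywords = [
    ["NBA", "Basketball", "LeBron"],   # Basketball
    ["FIFA", "Soccer", "Messi"],       # Soccer
    ["MLB", "Baseball", "Ruth"],       # Baseball
]
```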

1. Create jointly embedded document and word vectors using Doc2Vec, Sentence-Transformers, or SimCSE.

Documents will be placed close to other similar documents and close to the most distinguishing words.

2. Find document vectors that are similar to the keyword vectors of each topic.

Each color represents a different topic described by the respective keywords.

3. Clean outlier document vectors for each topic.

Red documents are outlier vectors that are removed and do not get used for calculating the label vector.

4. Compute the centroid of the outlier-cleaned document vectors as the label vector for each topic.

Points represent the label vectors of the respective topics.

5. Compute label vector <-> document vector similarities for each label vector and document vector in the dataset.

Documents are classified as the topic with the highest label vector <-> document vector similarity.
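The geometric core of steps 3-5 can be sketched with plain NumPy. This is a toy illustration of the idea, not the package's actual implementation; in particular, the simple distance-quantile rule below is only a stand-in for the package's outlier cleaning:

```python
import numpy as np

def label_vector(doc_vectors: np.ndarray, quantile: float = 0.9) -> np.ndarray:
    """Steps 3 and 4: drop outlier documents, then return the centroid."""
    centroid = doc_vectors.mean(axis=0)
    # distance of each document vector to the preliminary centroid
    dists = np.linalg.norm(doc_vectors - centroid, axis=1)
    keep = dists <= np.quantile(dists, quantile)  # remove the farthest documents
    return doc_vectors[keep].mean(axis=0)

def classify(doc_vectors: np.ndarray, label_vectors: np.ndarray) -> np.ndarray:
    """Step 5: assign each document to the label with the highest cosine similarity."""
    docs = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
    labels = label_vectors / np.linalg.norm(label_vectors, axis=1, keepdims=True)
    return (docs @ labels.T).argmax(axis=1)
```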

Installation

Back to Table of Contents

pip install lbl2vec

Usage

Back to Table of Contents

For detailed information visit the API Guide and the examples.

Model Training

Lbl2Vec model

Back to Table of Contents

Lbl2Vec learns word vectors, document vectors and label vectors using Doc2Vec during training.

Train new Lbl2Vec model from scratch using Doc2Vec

from lbl2vec import Lbl2Vec

# init model
model = Lbl2Vec(keywords_list=descriptive_keywords, tagged_documents=tagged_docs)

# train model
model.fit()

Important parameters:

  • keywords_list: iterable list of lists with descriptive keywords of type str. For each label at least one descriptive keyword has to be added as list of str.
  • tagged_documents: iterable list of gensim.models.doc2vec.TaggedDocument elements. If you wish to train a new Doc2Vec model, this parameter cannot be None and the doc2vec_model parameter must be None. If you use a pretrained Doc2Vec model, this parameter has to be None. The input corpus can simply be a list of elements, but for larger corpora, consider an iterable that streams the documents directly from disk/network.

Use word and document vectors from pretrained Doc2Vec model

Uses word vectors and document vectors from a pretrained Doc2Vec model to learn label vectors during Lbl2Vec model training.

from lbl2vec import Lbl2Vec

# init model
model = Lbl2Vec(keywords_list=descriptive_keywords, doc2vec_model=pretrained_d2v_model)

# train model
model.fit()

Important parameters:

  • keywords_list: iterable list of lists with descriptive keywords of type str. For each label at least one descriptive keyword has to be added as list of str.
  • doc2vec_model: pretrained gensim.models.doc2vec.Doc2Vec model. If given, Lbl2Vec uses this pretrained Doc2Vec model instead of training a new one. If this parameter is defined, the tagged_documents parameter has to be None. To get optimal Lbl2Vec results, the given Doc2Vec model should be trained with the parameters "dbow_words=1" and "dm=0".

Lbl2TransformerVec model

Back to Table of Contents

Lbl2TransformerVec learns word vectors, document vectors and label vectors using transformer-based language models during training. Using state-of-the-art transformer embeddings may not only yield better predictions but also eliminates the issue of unknown keywords during model training. While the Doc2Vec-based model can only use keywords that Lbl2Vec has seen during training, the transformer-based Lbl2TransformerVec model can learn label vectors from any set of keywords. That is because transformer vocabularies consist of individual characters, subwords, and words, allowing transformers to effectively represent every word in a sentence. This eliminates the out-of-vocabulary scenario. However, using transformers instead of Doc2Vec is much more computationally expensive, especially if no GPU is available.

Train new Lbl2TransformerVec model from scratch using the default transformer-embedding model

from lbl2vec import Lbl2TransformerVec

# init model using the default transformer-embedding model ("sentence-transformers/all-MiniLM-L6-v2")
model = Lbl2TransformerVec(keywords_list=descriptive_keywords, documents=document_list)

# train model
model.fit()

Train Lbl2TransformerVec model using an arbitrary Sentence-Transformers embedding model

from lbl2vec import Lbl2TransformerVec
from sentence_transformers import SentenceTransformer

# select sentence-transformers model
transformer_model = SentenceTransformer("all-mpnet-base-v2")

# init model
model = Lbl2TransformerVec(transformer_model=transformer_model, keywords_list=descriptive_keywords,
                           documents=document_list)

# train model
model.fit()

Train Lbl2TransformerVec model using an arbitrary SimCSE embedding model

from lbl2vec import Lbl2TransformerVec
from transformers import AutoModel

# select SimCSE model
transformer_model = AutoModel.from_pretrained("princeton-nlp/sup-simcse-roberta-base")

# init model
model = Lbl2TransformerVec(transformer_model=transformer_model, keywords_list=descriptive_keywords,
                           documents=document_list)

# train model
model.fit()

Important parameters:

  • keywords_list: iterable list of lists with descriptive keywords of type str. For each label at least one descriptive keyword has to be added as list of str.
  • documents: iterable list of text document elements (strings).
  • transformer_model: transformer model used to embed the labels, documents and keywords. The embedding model must be either of type sentence_transformers.SentenceTransformer or transformers.AutoModel.

Document prediction

Back to Table of Contents

The prediction API calls are the same for Lbl2Vec and Lbl2TransformerVec.

Predict label similarities for documents used for training

Computes the similarity scores for each document vector stored in the model to each of the label vectors.

# get similarity scores from trained model
model.predict_model_docs()

Important parameters:

  • doc_keys: list of document keys (optional). If None: return the similarity scores for all documents that are used to train the Lbl2Vec model. Else: only return the similarity scores of training documents with the given keys.

Predict label similarities for new documents that are not used for training

Computes the similarity scores for each given and previously unknown document vector to each of the label vectors from the model.

# get similarity scores for each new document from trained model
model.predict_new_docs(tagged_docs=tagged_docs)

Important parameters:

  • tagged_docs: iterable list of gensim.models.doc2vec.TaggedDocument elements for which to predict the label similarities.

Save and load models

Back to Table of Contents

The save and load API calls are the same for Lbl2Vec and Lbl2TransformerVec.

Save model to disk

model.save('model_name')

Load model from disk

model = Lbl2Vec.load('model_name')

Citation information

Back to Table of Contents

When citing Lbl2Vec [1,2] or Lbl2TransformerVec [3] in academic papers and theses, please use the following BibTeX entries:

@conference{schopf_etal_webist21,
author={Tim Schopf and Daniel Braun and Florian Matthes},
title={Lbl2Vec: An Embedding-based Approach for Unsupervised Document Retrieval on Predefined Topics},
booktitle={Proceedings of the 17th International Conference on Web Information Systems and Technologies - WEBIST},
year={2021},
pages={124-132},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010710300003058},
isbn={978-989-758-536-4},
issn={2184-3252},
}
@InProceedings{10.1007/978-3-031-24197-0_4,
author="Schopf, Tim
and Braun, Daniel
and Matthes, Florian",
editor="Marchiori, Massimo
and Dom{\'i}nguez Mayo, Francisco Jos{\'e}
and Filipe, Joaquim",
title="Semantic Label Representations with Lbl2Vec: A Similarity-Based Approach for Unsupervised Text Classification",
booktitle="Web Information Systems and Technologies",
year="2023",
publisher="Springer International Publishing",
address="Cham",
pages="59--73",
abstract="In this paper, we evaluate the Lbl2Vec approach for unsupervised text document classification. Lbl2Vec requires only a small number of keywords describing the respective classes to create semantic label representations. For classification, Lbl2Vec uses cosine similarities between label and document representations, but no annotation information. We show that Lbl2Vec significantly outperforms common unsupervised text classification approaches and a widely used zero-shot text classification approach. Furthermore, we show that using more precise keywords can significantly improve the classification results of similarity-based text classification approaches.",
isbn="978-3-031-24197-0"
}
@inproceedings{schopf_etal_nlpir22,
author = {Schopf, Tim and Braun, Daniel and Matthes, Florian},
title = {Evaluating Unsupervised Text Classification: Zero-shot and Similarity-based Approaches},
year = {2023},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
booktitle = {2022 6th International Conference on Natural Language Processing and Information Retrieval (NLPIR)},
keywords = {Natural Language Processing, Unsupervised Text Classification, Zero-shot Text Classification},
location = {Bangkok, Thailand},
series = {NLPIR 2022}
}

lbl2vec's People

Contributors

timschopf


lbl2vec's Issues

Multilingual?

Does this model work for languages other than English?
If yes, could you please specify which ones?

Is it possible to use 2 words as keywords

Is it possible to use keywords that are composed of 2 words each? For example, 'movie theater' would be a useful keyword if I wanted to find documents about movie theaters, but the individual words 'movie' and 'theater' would identify a different subset of documents than what I'm really after.

Localization possible ?

Does Lbl2Vec work with languages other than English, i.e. does it create the Doc2Vec embeddings correctly when used on other languages?

Lbl2TransformerVec(Lbl2Vec).predict_model_docs() stalls / lack of GPU utilization

It appears that on larger label datasets (>1000 labels), Lbl2TransformerVec(Lbl2Vec).predict_model_docs() will stall at the "calculate document vector <-> label vector similarities" step, perhaps due to a memory issue. Tracing the issue, it may be due to the utils.top_similar_vectors function below, which converts the Torch tensors to numpy and is called in an apply function within predict_model_docs(). Would there be a way to refactor it to leave the torch tensors on the GPU and convert to numpy outside of this function to improve performance?

The issue only seems to appear with label counts >1000.

utils.py

def top_similar_vectors(key_vector: np.array, candidate_vectors: List[np.array]) -> List[tuple]:
    '''
    Calculates the cosine similarities of a given key vector to a list of candidate vectors.

    Parameters
    ----------
    key_vector : `np.array`_
        The key embedding vector

    candidate_vectors : List[`np.array`_]
        A list of candidate embedding vectors

    Returns
    -------
    top_results : List[tuple]
        A list of (cos_similarity, list_idx) tuples, sorted descending by cosine similarity for each candidate vector in the list
    '''
    cos_scores = util.cos_sim(key_vector, np.asarray(candidate_vectors))[0]
    top_results = torch.topk(cos_scores, k=len(candidate_vectors))
    ## Returns the tensors then converts to numpy
    ## Consider refactoring implementation to leave tensors in GPU instead of moving to CPU at this point
    top_cos_scores = top_results[0].detach().cpu().numpy()
    top_indices = top_results[1].detach().cpu().numpy()

    return list(zip(top_cos_scores, top_indices))
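One possible shape of the suggested refactor, sketched below: the scores and indices stay on the tensors' device, and the caller converts to numpy once, outside the per-label loop. The function name and signature are hypothetical, and this is an untested sketch rather than a patch to the package:

```python
import torch

def top_similar_vectors_on_device(key_vector: torch.Tensor,
                                  candidate_vectors: torch.Tensor):
    """Variant of top_similar_vectors that returns tensors on their original device."""
    key = torch.nn.functional.normalize(key_vector.unsqueeze(0), dim=1)
    cands = torch.nn.functional.normalize(candidate_vectors, dim=1)
    cos_scores = (key @ cands.T).squeeze(0)             # cosine similarities
    top = torch.topk(cos_scores, k=cos_scores.numel())  # descending sort
    # no .cpu()/.numpy() here; the caller converts once, outside the label loop
    return top.values, top.indices
```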

multiclass multilabel classification

Hi team,

I have a couple of questions about multiclass multilabel classification.

  1. do I need to create keywords list for each class?
  2. by setting up threshold, does that mean it can use for multilabel classification? i.e. any class above the threshold is a match, so, one data can have multiple label?

Thanks,
Ling

Is paragraph classification possible?

Hello and thanks for sharing this. A question: can Lbl2Vec perform well when the "documents" are paragraph-sized? For example 3-5 sentences? Would we need to change Doc2Vec that Lbl2Vec currently uses into Sent2Vec or some other equivalent? Your thoughts?

Thanks!

pip install doesn't work

Hello
I'm trying to install the package but I get an error.

pip install lbl2vec

Collecting lbl2vec
ERROR: Could not find a version that satisfies the requirement lbl2vec (from versions: none)
ERROR: No matching distribution found for lbl2vec

I searched a bit on google and couldn't find a solution.

Python 3.7.4
pip 19.2.3

Lbl2TransformerVec - predict_model_docs() when clean_outliers=True creates Dimension out of range

When calling model.predict_model_docs() with clean_outliers=True, it produces an "IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)" error. With clean_outliers=False, there are no issues with predict_model_docs(). It appears that clean_outliers() is creating a dimension mismatch?

model = lbl2vec.Lbl2TransformerVec(transformer_model=transformer_model_loop, label_names=labels, keywords_list=keys,
                               documents=df['name'].apply(str.lower), device=torch.device('cuda'), similarity_threshold=.5, clean_outliers=True)
model.fit()
torch.set_default_tensor_type('torch.cuda.FloatTensor')

## Produces issues with clean_outliers=True
model_out = model.predict_model_docs()

Error: IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Lbl2TransformerVec - INFO - Calculate document<->label similarities

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-6-c5d23d521a48> in <module>

---> 15     model_out = model.predict_model_docs()


~/.local/lib/python3.8/site-packages/lbl2vec/lbl2transformervec.py in predict_model_docs(self, doc_idxs)
    333         self.logger.info('Calculate document<->label similarities')
    334         # calculate document vector <-> label vector similarities
--> 335         labeled_docs = self._get_document_label_similarities(labeled_docs=labeled_docs, doc_key_column=doc_key_column,
    336                                                              most_similar_label_column=most_similar_label_column,
    337                                                              highest_similarity_score_column=highest_similarity_score_column)

~/.local/lib/python3.8/site-packages/lbl2vec/lbl2transformervec.py in _get_document_label_similarities(self, labeled_docs, doc_key_column, most_similar_label_column, highest_similarity_score_column)
    532         label_similarities = []
    533         for label_vector in list(self.labels['label_vector_from_docs']):
--> 534             similarities = top_similar_vectors(key_vector=label_vector, candidate_vectors=list(labeled_docs['doc_vec']))
    535             similarities.sort(key=lambda x: x[1])
    536             similarities = [elem[0] for elem in similarities]

~/.local/lib/python3.8/site-packages/lbl2vec/utils.py in top_similar_vectors(key_vector, candidate_vectors)
    178           A descending sorted of tuples of (cos_similarity, list_idx) by cosine similarities for each candidate vector in the list
    179      '''
--> 180     cos_scores = util.cos_sim(key_vector, np.asarray(candidate_vectors))[0]
    181     top_results = torch.topk(cos_scores, k=len(candidate_vectors))
    182     top_cos_scores = top_results[0].detach().cpu().numpy()

~/.local/lib/python3.8/site-packages/sentence_transformers/util.py in cos_sim(a, b)
     45         b = b.unsqueeze(0)
     46 
---> 47     a_norm = torch.nn.functional.normalize(a, p=2, dim=1)
     48     b_norm = torch.nn.functional.normalize(b, p=2, dim=1)
     49     return torch.mm(a_norm, b_norm.transpose(0, 1))

~/.local/lib/python3.8/site-packages/torch/nn/functional.py in normalize(input, p, dim, eps, out)
   4630         return handle_torch_function(normalize, (input, out), input, p=p, dim=dim, eps=eps, out=out)
   4631     if out is None:
-> 4632         denom = input.norm(p, dim, keepdim=True).clamp_min(eps).expand_as(input)
   4633         return input / denom
   4634     else:

~/.local/lib/python3.8/site-packages/torch/_tensor.py in norm(self, p, dim, keepdim, dtype)
    636                 Tensor.norm, (self,), self, p=p, dim=dim, keepdim=keepdim, dtype=dtype
    637             )
--> 638         return torch.norm(self, p, dim, keepdim, dtype=dtype)
    639 
    640     def solve(self, other):

~/.local/lib/python3.8/site-packages/torch/functional.py in norm(input, p, dim, keepdim, out, dtype)
   1527         if out is None:
   1528             if dtype is None:
-> 1529                 return _VF.norm(input, p, _dim, keepdim=keepdim)  # type: ignore[attr-defined]
   1530             else:
   1531                 return _VF.norm(input, p, _dim, keepdim=keepdim, dtype=dtype)  # type: ignore[attr-defined]

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

ValueError: cannot compute similarity with no input

Hi Team,

I am getting following error while running model fit:

2022-04-08 14:19:04,344 - Lbl2Vec - INFO - Train document and word embeddings
2022-04-08 14:19:09,992 - Lbl2Vec - INFO - Train label embeddings

ValueError Traceback (most recent call last)
in

~/SageMaker/lbl2vec/lbl2vec.py in fit(self)
248 # get doc keys and similarity scores of documents that are similar to
249 # the description keywords
--> 250 self.labels[['doc_keys', 'doc_similarity_scores']] = self.labels['description_keywords'].apply(lambda row: self._get_similar_documents(
251 self.doc2vec_model, row, num_docs=self.num_docs, similarity_threshold=self.similarity_threshold, min_num_docs=self.min_num_docs))
252

~/anaconda3/envs/python3/lib/python3.6/site-packages/pandas/core/series.py in apply(self, func, convert_dtype, args, **kwds)
4211 else:
4212 values = self.astype(object)._values
-> 4213 mapped = lib.map_infer(values, f, convert=convert_dtype)
4214
4215 if len(mapped) and isinstance(mapped[0], Series):

pandas/_libs/lib.pyx in pandas._libs.lib.map_infer()

~/SageMaker/lbl2vec/lbl2vec.py in (row)
249 # the description keywords
250 self.labels[['doc_keys', 'doc_similarity_scores']] = self.labels['description_keywords'].apply(lambda row: self._get_similar_documents(
--> 251 self.doc2vec_model, row, num_docs=self.num_docs, similarity_threshold=self.similarity_threshold, min_num_docs=self.min_num_docs))
252
253 # validate that documents to calculate label embeddings from are found

~/SageMaker/lbl2vec/lbl2vec.py in _get_similar_documents(self, doc2vec_model, keywords, num_docs, similarity_threshold, min_num_docs)
625 for word in cleaned_keywords_list]
626 similar_docs = doc2vec_model.dv.most_similar(
--> 627 positive=keywordword_vectors, topn=num_docs)
628 except KeyError as error:
629 error.args = (

~/anaconda3/envs/python3/lib/python3.6/site-packages/gensim/models/keyedvectors.py in most_similar(self, positive, negative, topn, clip_start, clip_end, restrict_vocab, indexer)
775 all_keys.add(self.get_index(key))
776 if not mean:
--> 777 raise ValueError("cannot compute similarity with no input")
778 mean = matutils.unitvec(array(mean).mean(axis=0)).astype(REAL)
779

ValueError: cannot compute similarity with no input
