My NLP codes
Split a document into words, accounting for apostrophized words, URLs, emails, emojis, punctuation, stop words, etc., using regular expressions. Lemmatization. Making a histogram of word lengths. Examining words of a particular length. Examining lexical diversity. Making a lexical dispersion plot showing where particular words of interest appear in a document.
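A minimal sketch of the tokenization, word-length histogram, and lexical-diversity steps, using only the standard library. The regex, the small stop-word set, and the function names are illustrative, not the repo's actual code:

```python
import re
from collections import Counter

# One alternation per token type: URLs first, then e-mails, then words
# (keeping apostrophized forms like "don't" together).
TOKEN_RE = re.compile(r"""
    https?://\S+                     # URLs
  | \b[\w.+-]+@[\w-]+\.[\w.]+\b     # e-mail addresses
  | \b\w+(?:'\w+)?\b                # words, incl. apostrophized words
""", re.VERBOSE)

STOP_WORDS = {"the", "a", "an", "is", "at", "on", "and", "to"}  # tiny illustrative subset

def tokenize(text):
    """Lower-case the text and pull out URLs, e-mails and words, dropping stop words."""
    return [t for t in TOKEN_RE.findall(text.lower()) if t not in STOP_WORDS]

def word_length_histogram(tokens):
    """Map word length -> frequency (the data behind the histogram)."""
    return Counter(len(t) for t in tokens)

def lexical_diversity(tokens):
    """Fraction of tokens that are distinct."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

doc = "Don't email me at bob@example.com, visit https://example.org instead!"
tokens = tokenize(doc)
print(tokens)
print(word_length_histogram(tokens))
print(lexical_diversity(tokens))
```

A lexical dispersion plot would then just be the token indices at which each word of interest occurs, plotted against position in the document.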
Split a document into sentences. Perform named entity recognition. Perform sentiment analysis.
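The repo presumably uses an NLP library for these steps; as a self-contained illustration, here is a pure-Python toy: a regex sentence splitter, a capitalization heuristic standing in for named entity recognition, and a tiny lexicon standing in for a sentiment model. All names and word lists are invented for the example:

```python
import re

def split_sentences(text):
    """Naive splitter: break after ., ! or ? followed by whitespace."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def naive_entities(sentence):
    """Toy stand-in for NER: capitalized words not at sentence start."""
    words = sentence.split()
    return [w.strip(".,!?") for w in words[1:] if w[:1].isupper()]

# Toy sentiment lexicon: +1 per positive word, -1 per negative word.
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"bad", "hate", "terrible"}

def sentiment(sentence):
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

text = "Alice loves London. The weather there is terrible."
sents = split_sentences(text)
print(sents)
print(naive_entities(sents[0]))  # ['London']
print(sentiment(sents[1]))       # -1
```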
Web scraping to build a dataset of books with titles, descriptions, genres, etc. Perform EDA on the data. Preprocess book descriptions if required. Use TfidfVectorizer to convert descriptions to a matrix. Use t-SNE to project this high-dimensional data into two dimensions to help visualize books of different genres. Use a measure of similarity, such as cosine similarity, to find similar books.
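The TF-IDF → t-SNE → cosine-similarity stages can be sketched with scikit-learn, substituting a handful of invented one-line descriptions for the scraped dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for the scraped book descriptions.
descriptions = [
    "a wizard attends a school of magic",
    "a young wizard fights a dark lord with magic",
    "a detective solves a murder in London",
    "a private detective investigates a murder case",
    "an astronaut is stranded on Mars and grows food",
    "a crew travels through space to a distant planet",
]

# Descriptions -> TF-IDF matrix (rows: books, columns: vocabulary terms).
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(descriptions)

# Project the high-dimensional TF-IDF vectors to 2-D for visualization.
# perplexity must be smaller than the number of samples, hence the tiny value.
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(X.toarray())

# Cosine similarity between every pair of books.
sim = cosine_similarity(X)

# Most similar book to book 0 (excluding itself).
best = max(range(len(descriptions)), key=lambda j: sim[0, j] if j != 0 else -1)
print(coords.shape)
print(best)  # the other wizard book (index 1) shares "wizard" and "magic"
```

On a real dataset the 2-D `coords` would be scatter-plotted, colored by genre.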
Use Google's word2vec to obtain a vector representation of each word (capturing the contextual and latent meaning of each word, so that the vectors for dog and cat, or tea and coffee, or girl and woman, are similar). Find the weighted average of the vectors of all the words that describe each book, weighted by their TF-IDF scores. With this new vector representation for each book, repeat the t-SNE visualization and cosine similarity analysis to find similar books.
Code sourced from Siddhardhan's machine learning projects
Use of Logistic Regression on a labelled dataset from Kaggle to model and predict whether news is real or fake. High accuracy achieved.
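A minimal sketch of the approach with scikit-learn, using a few invented headlines and labels in place of the Kaggle train.csv data (label 1 = fake):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the Kaggle (text, label) pairs.
texts = [
    "scientists publish peer reviewed study on climate",
    "government releases official economic report",
    "local council announces new school budget",
    "shocking miracle cure doctors do not want you to know",
    "you will not believe this one weird secret trick",
    "celebrity scandal exposed in unbelievable leaked video",
]
labels = [0, 0, 0, 1, 1, 1]

# Vectorize the text and fit a logistic regression classifier in one pipeline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["official report on school budget released"]))
print(model.predict(["miracle secret trick exposed"]))
```

On the real dataset one would hold out a test split and report accuracy there rather than on the training data.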
https://www.kaggle.com/c/fake-news/data?select=train.csv