AdamSpannbauer commented on July 24, 2024

Restating the problem to check my understanding:
You want to input a data.frame of documents and get the top lexRanked sentence per document.

Assuming you want to stick to the tidyverse, here is a possible solution using a custom function and purrr::map(). The code below is taken from a gist created to address this issue; the gist also includes the code used to create the example input data.frame (my_df) and to show the output.
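
If you want something self-contained to run, a toy stand-in for my_df could look like the following (hypothetical example data; the gist has the real input-creation code):

#hypothetical stand-in for the gist's example input: one row per document
my_df = data.frame(
  doc_id = c(1, 2),
  text = c("The cat sat on the mat. Cats nap often. The mat was warm.",
           "Dogs love to play fetch. A loyal dog waits by the door."),
  stringsAsFactors = FALSE
)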

#tidyverse solution to get top lexrank sentence per doc
library(dplyr)
library(purrr)
#convert to tibble
my_tbl = as_data_frame(my_df)

#function to get top lexranked sentence in a df
get_top_sentences = function(df_in, text_col = "text", n=1) {
  #perform piped lexrank process and extract top ranked sentence
  lex_df = lexRankr::unnest_sentences_(df_in, "sentences", text_col) %>% #parse sentences
    lexRankr::bind_lexrank(sentences, sent_id, level = "sentences") %>% #perform lexrank
    arrange(desc(lexrank)) %>% #get top ranked sentence(s)
    slice(1:n)
  return(lex_df)
}

#get top sentence(s) per document
#split into a list with document dfs as elements
top_sent_df = split(my_tbl, my_tbl$doc_id) %>% 
  #apply lexrank function to extract top n ranked sentences
  map(get_top_sentences, n=1) %>% 
  #recombine into single df
  bind_rows()
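
The split() / map() / bind_rows() sequence is the classic split-apply-combine pattern: split() partitions the data frame into a list with one element per doc_id, map() runs the lexRank pipeline on each document independently, and bind_rows() stacks the per-document results back into a single data frame.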

Monduiz commented on July 24, 2024

Thank you, this shows me an approach that I think works like yours. Here is how I have adapted it. I just want to make sure I am getting the results I should be getting and that the ranking is what it should be. This is how I see a tidyverse workflow guided by your example; hopefully the results are valid:

library(rvest)
library(tidyverse)
library(stringr)
library(purrr)
library(lexRankr)

gm_headlines <- read_html("https://beta.theglobeandmail.com/politics/")

gm_links <- gm_headlines %>%
  html_nodes(".o-card__link") %>%
  html_attr("href") %>% 
  xml2::url_absolute("https://beta.theglobeandmail.com")

pages <- gm_links %>% map(read_html)

gm_articles <- pages %>% 
  map(. %>% 
        html_nodes(".c-article-body__text") %>% 
        html_text()
  )

gm_titles <- gm_headlines %>%
  html_nodes('.o-card__content-text') %>%
  html_text()

gm <- data_frame(gm_titles, gm_links, gm_articles)

# Remove duplicates and video links
gm <- gm %>% 
  distinct(gm_titles, .keep_all = TRUE) %>% 
  filter(!str_detect(gm_links, 'video')) %>%
  mutate(doc_id = 1:length(gm_articles))


### summarization
gm_unnest <- gm %>% 
  select(doc_id, gm_articles) %>% 
  unnest(gm_articles)

gm_rank <- gm_unnest %>% unnest_sentences("sentences", gm_articles) %>% 
  bind_lexrank(sentences, sent_id, level = "sentences")

gm_rank <- gm_rank %>% group_by(doc_id) %>% 
  arrange(desc(lexrank)) %>% 
  arrange(doc_id)

gm_rank <- gm_rank %>% 
  select(doc_id, sentences, lexrank) %>% 
  group_by(doc_id) %>% 
  top_n(2, lexrank)

AdamSpannbauer commented on July 24, 2024

It depends on what you are trying to accomplish. Your code will give you the highest lexRanked sentences per doc, but the lexRank itself is calculated with respect to the whole corpus. So the sentences returned are not the most representative sentences per document, but rather the most representative sentences of the corpus. If that is what you are trying to accomplish, then your solution works.

The code I provided will return the highest lexRanked sentences per document. If this is your goal, you will also benefit from a performance boost (since executing lexRank on a full corpus can be computationally expensive). Below is an extension of the gist I posted, using purrr::possibly() to add some easy tryCatch logic.

safe_top_sent = purrr::possibly(get_top_sentences, otherwise = NULL, quiet = FALSE)
#get top sentence(s) per document
#split into a list with document dfs as elements
gm_rank_doc_level = split(gm_unnest, gm_unnest$doc_id) %>% 
  #apply lexrank function to extract top n ranked sentences
  map(safe_top_sent, text_col="gm_articles", n=2) %>% 
  #recombine into single df
  bind_rows()

Additionally, in your code you call unnest_sentences("sentences", gm_articles). When using unnest_sentences you don't need to quote your column names (it works fine if you do; you just don't have to), so your call could be unnest_sentences(sentences, gm_articles). In the gist I used unnest_sentences_, which requires character column names instead of unquoted names; I did this to easily parameterize one of the inputs in the custom function.
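
A quick illustration of the two interfaces (df here is a hypothetical data frame with a text column named text):

#equivalent calls: bare column names vs. character strings
parsed_nse = lexRankr::unnest_sentences(df, sentences, text)
parsed_se  = lexRankr::unnest_sentences_(df, "sentences", "text")

#the character-based version is handy when the column name
#arrives as a function argument, as in get_top_sentences()
parse_any = function(df_in, text_col) {
  lexRankr::unnest_sentences_(df_in, "sentences", text_col)
}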

Monduiz commented on July 24, 2024

I am trying to get the highest ranked sentences per document, so I will go back to your code. I wasn't able to make it work at first, and the split() line was confusing me.
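
For reference, split() just partitions the data frame into a named list, one element per doc_id; a minimal illustration using my_tbl from the first example:

#split() returns a named list keyed by doc_id,
#e.g. list("1" = rows for doc 1, "2" = rows for doc 2, ...)
doc_list = split(my_tbl, my_tbl$doc_id)
names(doc_list)    #the document ids
doc_list[["1"]]    #all rows belonging to doc_id 1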

AdamSpannbauer commented on July 24, 2024

I'll look into finding the best way to add this functionality to the package so the process is simpler.

Monduiz commented on July 24, 2024

I am still trying to adapt the get_top_sentences function as it has not been working for me. Thank you so much for your insights!

AdamSpannbauer commented on July 24, 2024

Your full script runs for me if I make the safe_top_sent modification from my 2nd comment. There were 2 articles that ran into errors in the lexRank process.

If the fix mentioned doesn't resolve your issue, please post your error and we can work through it.

Monduiz commented on July 24, 2024

Ok, I do get an error, but I think I am not using your modification correctly.

Error: All values in sentence term tfidf matrix are 0.  Similarities would return as NaN

So, I am assuming the starting point is after gm_unnest?

gm_unnest <- gm %>% 
  select(doc_id, gm_articles) %>% 
  unnest(gm_articles)

From there, is the get_top_sentences function also modified in some way, and then you run your modification?

AdamSpannbauer commented on July 24, 2024

I combined your code and a possible solution using the get_top_sentences() function into this gist (new code starting at line 48).

The script runs into 2 errors during the lexRank process (so we will have 2 documents missing from our results), but the tryCatch-style logic allows the script to continue processing.
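
For readers without the gist at hand, the combined flow looks roughly like this (a sketch assembled from the code in the comments above; the gist remains the authoritative version):

#per-document lexRank with error handling
library(dplyr)
library(purrr)

#get_top_sentences() as defined in the first comment (condensed)
get_top_sentences = function(df_in, text_col = "text", n = 1) {
  lexRankr::unnest_sentences_(df_in, "sentences", text_col) %>%
    lexRankr::bind_lexrank(sentences, sent_id, level = "sentences") %>%
    arrange(desc(lexrank)) %>%
    slice(1:n)
}

#wrap it so a document that errors returns NULL instead of stopping the run
safe_top_sent = possibly(get_top_sentences, otherwise = NULL, quiet = FALSE)

#split by document, rank within each document, recombine
#(bind_rows() silently drops the NULL elements from failed docs)
gm_rank_doc_level = split(gm_unnest, gm_unnest$doc_id) %>%
  map(safe_top_sent, text_col = "gm_articles", n = 2) %>%
  bind_rows()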

Monduiz commented on July 24, 2024

This is working nicely! Thank you, this is great! If you do look into simplifying this process, I would love to test it!
