
Comments (9)

mrchypark commented on May 30, 2024

When you say the same content is generated every time, do you mean that files with different names keep being created but their contents are identical?


cogud0908 commented on May 30, 2024

Yes. Also, I want to crawl data from earlier dates, but it seems to keep fetching only the latest news as of the current date...


mrchypark commented on May 30, 2024

I'll look into it. Thank you.


cogud0908 commented on May 30, 2024

Thank you.


cogud0908 commented on May 30, 2024

It turns out the search query doesn't support spaces; that's probably the cause.

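If missing support for spaces is indeed the problem, one workaround (a minimal sketch, not a feature of N2H4 itself; the multi-word query below is only a hypothetical example) is to percent-encode the query before pasting it into the search URL:

# Percent-encode a multi-word query so it survives inside the URL.
# URLencode() is base R; reserved = TRUE also encodes spaces.
query <- "노인 자살"   # hypothetical multi-word query, for illustration only
encoded <- URLencode(query, reserved = TRUE)
pageUrli <- paste0("https://search.naver.com/search.naver?where=news&query=", encoded)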

mrchypark commented on May 30, 2024

Could you show me the full code you ran?


cogud0908 commented on May 30, 2024

install.packages("selectr")
install.packages("xml2")
library(curl)
library(rvest)

if (!require("devtools")) install.packages("devtools")
devtools::install_github("forkonlp/N2H4")
library(N2H4)

options(stringsAsFactors = F)

success <- function(res){
cat("Request done! Status:", res$status, "\n")
#res$content<-iconv(rawToChar(res$content),from="CP949",to="UTF-8")
res$content<-rawToChar(res$content)
data <<- c(data, list(res))
}
failure <- function(msg){
cat("Oh noes! Request failed!", msg, "\n")
}

strDate<-as.Date("2001-01-02")
endDate<-as.Date("2005-12-31")
strTime<-Sys.time()
midTime<-Sys.time()

qlist<-c("노인자살")
for (i in 1:length(qlist)){
dir.create("./data",showWarnings=F)
dir.create(paste0("./data/news_",qlist[i]),showWarnings=F)

for (date in strDate:endDate){
date<-as.character(as.Date(date,origin = "1970-01-01"))
dateo<-gsub("-",".",date)
dated<-gsub("-","",date)
print(paste0(date," / ",qlist[i], "/ start Time: ", strTime," // spent Time at first: ", Sys.time()-strTime))
midTime<-Sys.time()
pageUrli<-paste0("https://search.naver.com/search.naver?where=news&query=",qlist[i],"&ie=utf8&sm=tab_srt&sort=0&photo=0&field=0&reporter_article=&pd=3&ds=",dateo,"&de=",dateo,"&docid=&nso=so%3Ar%2Cp%3Afrom",dated,"to",dated,"%2Ca%3Aall&mynews=0&mson=0&refresh_start=0&related=0")
trym<-0
max<-try(getMaxPageNum(pageUrli, search=T), silent = T)
while(trym<=5&&class(max)=="try-error"){
max<-try(getMaxPageNum(pageUrli, search=T), silent = T)
Sys.sleep(abs(rnorm(1)))
trym<-trym+1
print(paste0("try again max num: ",pageUrli))
}
if(max=="no result"){
print("no naver news links this time")
next
}
for (pageNum in 1:max){
start<-(pageNum-1)*10+1
print(paste0(date," / ",qlist[i], "/ start Time: ", strTime," / spent Time at first: ", Sys.time()-strTime))
midTime<-Sys.time()
pageUrl<-paste0(pageUrli,"&start=",start)
tryp<-0
newsList<-try(getUrlListByQuery(pageUrl), silent = T)
while(tryp<=5&&class(newsList)=="try-error"){
newsList<-try(getUrlListByQuery(pageUrl), silent = T)
Sys.sleep(abs(rnorm(1)))
tryp<-tryp+1
print(paste0("try again max num: ",pageUrl))
}
if(newsList$news_links[1]=="no naver news"){
print("no naver news links this time")
next
}

  pool <- new_pool()
  data <- list()
  sapply(newsList$news_links, function(x) curl_fetch_multi(x,success,failure))
  res <- multi_run()
  
  if( identical(data, list()) ){
    pool <- new_pool()
    data <- list()
    sapply(newsList$news_links, function(x) curl_fetch_multi(x,success,failure))
    res <- multi_run()
  }
  
  closeAllConnections()
  
  loc<-sapply(data, function(x) grepl("^http://news.naver",x$url))
  cont<-sapply(data, function(x) x$content)
  cont<-cont[loc]
  
  if(identical(cont,character(0))){ 
    print("no naver news links this time")
    next
  }
  
  titles<-unlist(lapply(cont,function(x) getContentTitle(read_html(x))))
  bodies<-unlist(lapply(cont,function(x) getContentBody(read_html(x))))
  presses<-unlist(lapply(cont,function(x) getContentPress(read_html(x))))
  datetime<-lapply(cont,function(x) getContentDatetime(read_html(x))[1])
  datetime<-sapply(datetime, function(x) (as.character(x)[1]))
  edittime<-lapply(cont,function(x) getContentDatetime(read_html(x))[2])
  edittime<-sapply(edittime, function(x) (as.character(x)[1]))
  
  urls<-sapply(data, function(x) x$url)
  urls<-urls[loc]
  
  datC<-data.frame(titles,urls,presses,datetime,edittime,bodies)
  
  write.csv(datC, file=paste0("./data/news_",qlist[i],"/news_",date,"_",pageNum,".csv"),row.names = F, fileEncoding="euc-kr")
  
}

}
}


cogud0908 commented on May 30, 2024

I want to collect data from 2000~2005, but I keep getting errors :(

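One possible source of the recurring errors, assuming the failure happens at the write step (the thread never shows the actual error message, so this is only a guess): write.csv() with fileEncoding = "euc-kr" can fail when an article contains characters that EUC-KR cannot represent. A minimal sketch that writes UTF-8 instead, reusing datC from the loop above:

# Sketch under the assumption that the EUC-KR conversion is what fails:
# writing UTF-8 avoids the lossy conversion entirely.
write.csv(datC,
          file = paste0("./data/news_", qlist[i], "/news_", date, "_", pageNum, ".csv"),
          row.names = FALSE, fileEncoding = "UTF-8")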

mrchypark commented on May 30, 2024

For search-based crawling, the environment has changed in a way that makes it hard to keep the getMaxPageNum() function working, so support for that feature has been discontinued.

