
rnn_recsys's Introduction

rnn_recsys

Our implementation of the research paper: Shumpei Okura, Yukihiro Tagami, Shingo Ono, and Akira Tajima. "Embedding-based News Recommendation for Millions of Users". In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '17), 2017. https://dl.acm.org/citation.cfm?id=3098108

I provide a toy demo dataset to demonstrate the file format. On this dataset, model AVG has an AUC of 0.76, and model RNN has an AUC of 0.92. You can reproduce this simply by running 'python train.py'. Sorry that I cannot upload my own real-world dataset (Bing News).

Overall, this recommender system has two steps: (1) train an autoencoder for articles; (2) train an RNN based on user-item interactions.

training autoencoder

Each raw article has the format "article_id \t category \t title". I first build a word dictionary to hash words to ids and compute TF-IDF statistics. The input for training the autoencoder is the TF-IDF vector of each article title. Below is a result from my trained CDAE (scripts can be found in helper/demo.py):

(figure: CDAE reconstruction examples)

Analysis: I am really surprised by the power of the autoencoder. News titles are usually very short, but the autoencoder can recover their intent. For example, for the first news item the input content is 'Travel tips for thanksgiving', and the decoded words are (ordered by importance) 'tips, travel, holidays, thanksgiving, enjoy, shopping, period'. Note that the words 'holidays' and 'shopping' do not appear in the original title, but they are captured as strongly related words.
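
For illustration only (the repo builds its own word dictionary in helper/demo.py), here is a minimal sketch of this preprocessing step using scikit-learn's TfidfVectorizer; the titles and parameters below are toy placeholders, not the real data or the repo's code:

# A minimal sketch of the preprocessing step, using scikit-learn instead of the
# repo's own dictionary-building code in helper/demo.py (illustration only).
from sklearn.feature_extraction.text import TfidfVectorizer

titles = [
    "Travel tips for thanksgiving",              # toy titles, not the real Bing News data
    "Holiday shopping deals start this week",
]

# Build the word dictionary (word -> id) and the TF-IDF statistics in one step.
vectorizer = TfidfVectorizer(max_features=30000)
tfidf = vectorizer.fit_transform(titles)         # sparse matrix, one row per title

# Print each title in the "word_id:value" style used by data/articles.txt.
for row in tfidf:
    coo = row.tocoo()
    print(" ".join("%d:%.2f" % (word_id, value) for word_id, value in zip(coo.col, coo.data)))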

(figure: training curve of error)

(figure: training curve of loss)

training RNN recsys

After training the autoencoder, you need to encode each raw article to get its embedding:

encode_articles(...)
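
The real encode_articles(...) is part of this repo; the sketch below only illustrates the idea, assuming articles stored in the "article_id \t word_id:value ..." format described under "data description", and a trained CDAE object whose encode() method and input_dim attribute are hypothetical names, not the repo's actual API:

# Hypothetical sketch of the encoding step (cdae.encode() and cdae.input_dim are
# assumed names; the repo's real encode_articles(...) may differ).
import numpy as np

def encode_articles(cdae, articles_file, output_file):
    """Read "article_id <TAB> word_id:value ..." lines and write "article_id <TAB> embedding" lines."""
    with open(articles_file) as fin, open(output_file, "w") as fout:
        for line in fin:
            article_id, pairs = line.rstrip("\n").split("\t", 1)
            # Expand the sparse TF-IDF pairs into a dense input vector.
            dense = np.zeros(cdae.input_dim, dtype=np.float32)
            for pair in pairs.split():
                word_id, value = pair.split(":")
                dense[int(word_id)] = float(value)
            embedding = cdae.encode(dense[np.newaxis, :])[0]  # assumed encoder call
            fout.write(article_id + "\t" + " ".join("%.6f" % v for v in embedding) + "\n")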

Finally, train your RNN recsys:

train_RS()

data description:

data/articles.txt: each line is an article, in the form of word_id:word_tf_idf_value

data/articles_CDAE.txt: each line contains 3 articles, split by tabs; article_01 and article_02 belong to the same category, while article_03 belongs to a different category

data/RS/articles_embeddings.txt: each line is an article, in the form of article_ID \t embedding_vector (D float numbers, where D denotes the dimension of embedding)

data/RS/train.txt: each line is a training instance, in the form of user_history \t target_item_id \t label. user_history is a sequence of item_id, split by spaces (see the parsing sketch below).

articles_TFIDF_norm_3w.txt format: each line is one document, such as "14 \t 17513:1.00 27510:0.81", representing article_id \t word_id01:value word_id02:value ..., where the word_id:value pairs are separated by spaces.
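
A minimal sketch (not the repo's actual loader code) of reading the two formats above:

# Minimal readers for the file formats described above (a sketch, not the repo's loaders).

def read_tfidf_articles(path):
    """Parse "article_id <TAB> word_id:value ..." lines into {article_id: {word_id: value}}."""
    articles = {}
    with open(path) as fin:
        for line in fin:
            article_id, pairs = line.rstrip("\n").split("\t", 1)
            articles[int(article_id)] = {
                int(word_id): float(value)
                for word_id, value in (p.split(":") for p in pairs.split())
            }
    return articles

def read_train_instances(path):
    """Parse "user_history <TAB> target_item_id <TAB> label" lines from data/RS/train.txt."""
    with open(path) as fin:
        for line in fin:
            history, target, label = line.rstrip("\n").split("\t")
            yield [int(i) for i in history.split()], int(target), int(label)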

rnn_recsys's People

Contributors

leavingseason


rnn_recsys's Issues

TypeError: reduce_sum() got an unexpected keyword argument 'keepdims'

mldl@ub1604:/ub16_prj/rnn_recsys$ python3 train.py
launching the program...
1.4.0
Traceback (most recent call last):
File "train.py", line 233, in
train_RS()
File "train.py", line 124, in train_RS
my_model = RNNRS(**hparams)
File "/home/mldl/ub16_prj/rnn_recsys/models/RNNRS.py", line 35, in init
self.predictions, self.error, self.loss, self.train_step, self.summary = self._build_model()
File "/home/mldl/ub16_prj/rnn_recsys/models/RNNRS.py", line 65, in _build_model
preds = tf.sigmoid( tf.reduce_sum(tf.multiply(u_t, self.Item), 1, keepdims = True) + global_bias , name= 'prediction') ##--
TypeError: reduce_sum() got an unexpected keyword argument 'keepdims'
Exception ignored in: <bound method BaseRS.__del__ of <models.RNNRS.RNNRS object at 0x7f77f5a6dba8>>
Traceback (most recent call last):
File "/home/mldl/ub16_prj/rnn_recsys/models/LinearAvgRS.py", line 29, in del
if self.log_writer:
AttributeError: 'RNNRS' object has no attribute 'log_writer'
mldl@ub1604:/ub16_prj/rnn_recsys$
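
This error is a TensorFlow version mismatch: the log shows TF 1.4.0, whose reduce ops only accept the older keep_dims spelling (the keepdims name was added in later 1.x releases). Either upgrade TensorFlow, or, as a minimal sketch of the fix, rename the argument in models/RNNRS.py:

# models/RNNRS.py, line 65: TF 1.4 expects keep_dims rather than keepdims.
preds = tf.sigmoid(
    tf.reduce_sum(tf.multiply(u_t, self.Item), 1, keep_dims=True) + global_bias,
    name='prediction')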

about the train.txt

hi, dear,
what is the meaning of this sentence?
data/RS/train.txt: each line is a training instance, in the form of user_history \t target_item_id \t label. user_history is a sequence of item_id, split by spaces.

Is user_history only the user's clicks?
Is target_item_id the user's last click?
And if the last click is a real click, is the label 1, else 0?

Could you please help me?
Thanks

word_hashing_file

Hi, I am a little confused about word_hashing_file = r'Y:\BingNews\Zhongxia\my\articles_wordhashing_3w.obj'.
Does the word_hashing_file contain some content, or is it just empty? I don't have this file. Should I create one, or will the project create this file automatically when it runs?

my AUC simply does not go up

hi,

Thanks for sharing your work.
I'm trying to apply your code to my toy recommendation project. In doing so, I've run into 3 concerns:

  1. We have a dataset of 300k Chinese titles, and after several training epochs of the autoencoder the loss plateaus at 19k+, which is simply too high. Do you have any idea how to fine-tune the autoencoder?
  2. I trained the RNN with 50k samples; again, after several epochs the AUC goes back and forth around 0.53~0.54. Is there any way to fine-tune the model?
  3. Is it a good idea to just fetch the trained user embedding and take its dot product with the item embeddings to make recommendations? Thanks

Original dataset

hi
You have done great work. Can you please email me or share a link to the original complete dataset?

I would appreciate it.

Thanks

questions about the training data

hi, author,
I have some questions about the training data.
In train.txt, what does each line represent?
And in article.txt, why are the first line and the second line repeated eight times?
