
purs's People

Contributors

lpworld


purs's Issues

Can you provide a runnable version of train.py using one of the public datasets?

I get a "file does not exist" error while running train.py:

/usr/local/lib/python2.7/dist-packages/matplotlib/__init__.py:1067: UserWarning: Duplicate key in file "/home/ubuntu/.config/matplotlib/matplotlibrc", line #2
  (fname, cnt))
/usr/local/lib/python2.7/dist-packages/matplotlib/__init__.py:1067: UserWarning: Duplicate key in file "/home/ubuntu/.config/matplotlib/matplotlibrc", line #3
  (fname, cnt))
Traceback (most recent call last):
  File "train.py", line 95, in <module>
    data = pd.read_csv('test.txt', names=['utdid','vdo_id','click','hour'])
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 709, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 449, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 818, in __init__
    self._make_engine(self.engine)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 1049, in _make_engine
    self._engine = CParserWrapper(self.f, **self.options)
  File "/usr/local/lib/python2.7/dist-packages/pandas/io/parsers.py", line 1695, in __init__
    self._reader = parsers.TextReader(src, **kwds)
  File "pandas/_libs/parsers.pyx", line 402, in pandas._libs.parsers.TextReader.__cinit__
  File "pandas/_libs/parsers.pyx", line 718, in pandas._libs.parsers.TextReader._setup_parser_source
IOError: File test.txt does not exist
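
The traceback just means that test.txt is not present in the working directory. As a stopgap, here is a hypothetical way to build a test.txt with the four columns train.py expects (utdid, vdo_id, click, hour) from a public dataset; the ratings.csv filename, its column names, and the 3.5 binarization threshold the paper mentions for MovieLens are assumptions on my part, not the authors' script.

import pandas as pd

# Assumed input: MovieLens ratings.csv with columns userId,movieId,rating,timestamp.
ml = pd.read_csv('ratings.csv')
ml['click'] = (ml['rating'] > 3.5).astype(int)  # binarize at the 3.5 threshold the paper mentions
# Write the four columns train.py reads as utdid, vdo_id, click, hour (no header).
ml[['userId', 'movieId', 'click', 'timestamp']].to_csv('test.txt', header=False, index=False)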

Questions about train/test split, and evaluation metrics

Thank you very much for your contribution!

I have a couple of small questions about dataset generation and evaluation metrics.

Question-1: train/test split

Regarding the code block below, I'm curious whether the train/test split works correctly. I noticed that the example data in "test.txt" is not in chronological order, which may lead to an incorrect train/test split. For example, a user interacts with videos in the order ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n'], but these behaviors are randomly split into two parts, so the long-term history of 'n' could be ['a', 'd', 'e', 'g', 'k', 'm'], because the missing behaviors land in the other part of the dataset (see the chronological-split sketch after the quoted code).

data = pd.read_csv('test.txt', names=['utdid','vdo_id','click','hour'])
user_id = data[['utdid']].drop_duplicates().reindex()
user_id['user_id'] = np.arange(len(user_id))
data = pd.merge(data, user_id, on=['utdid'], how='left')
item_id = data[['vdo_id']].drop_duplicates().reindex()
item_id['video_id'] = np.arange(len(item_id))
data = pd.merge(data, item_id, on=['vdo_id'], how='left')
data = data[['user_id','video_id','click','hour']]
userid = list(set(data['user_id']))
itemid = list(set(data['video_id']))
user_count = len(userid)
item_count = len(itemid)

validate = 4 * len(data) // 5       # index at 80% of the rows
train_data = data.loc[:validate,]   # first 80% by row position, not by time
test_data = data.loc[validate:,]    # last 20% by row position
train_set, test_set = [], []
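
A minimal sketch of the chronological per-user split the question is asking about (an assumption about the intended behavior, not the repository's code):

data = data.sort_values(['user_id', 'hour'])                       # order each user's rows by time
group_size = data.groupby('user_id')['video_id'].transform('count')
is_train = data.groupby('user_id').cumcount() < 0.8 * group_size   # first 80% of each user's history
train_data = data[is_train]
test_data = data[~is_train]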

Question-2: HR@10

According to the paper, "HR@10, which measures the number of clicks in top 10 recommendations", but I couldn't find the code that generates top-K recommendation results given a list of user behavior. Is the HR@10 metric implemented correctly, or did I miss something?
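
For reference, a common way HR@10 is computed when a top-K list is actually generated (an illustrative sketch, not the repository's implementation):

import numpy as np

def hr_at_10(scores, target_index):
    # scores: predicted scores over one user's candidate items
    # returns 1 if the held-out item ranks in the top 10, else 0
    top10 = np.argsort(scores)[::-1][:10]
    return int(target_index in top10)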

Question-3: Unexpectedness

From my point of view, "unexpectedness" measures the semantic distance between the current user behavior and a list of recommended items. However, in the code below, the program doesn't generate a recommendation list (similar to what I mentioned in Question-2); instead, it always calculates the semantic distance over a fixed set of <history_behavior, target_item> pairs.

def unexpectedness(sess, model, test_set):
    unexp_list = []
    for _, uij in DataInput(test_set, batch_size):
        score, label, user, item, unexp = model.test(sess, uij)  # unexp is computed on the fixed test pairs
        for index in range(len(score)):
            unexp_list.append(unexp[index])
    return np.mean(unexp_list)

In terms of the ablation study, in my opinion, different variations should produce different recommendation results, and unexpectedness should be calculated under the same standard (i.e., the same item embedding space). However, the code actually fixes the <history_behavior, target_item> pairs for evaluation, while the unexpectedness scores are calculated with different versions of the item embeddings.

Under these circumstances, I think the unexpectedness ablation study is not meaningful, as a larger unexpectedness score doesn't imply that the recommendation results really move away from the user's past behavior.

Hello! A few questions.

Question 1: after reading the paper and the code, I don't quite understand the time attribute in the dataset. What exactly does time refer to, and when building training data from a public dataset (MovieLens), which field of the MovieLens data does time come from?
Question 2: in the code, computing hit, cov, and unexp each invokes the model once. Could these calls be merged so that the model is invoked only once? (See the sketch below.)
Many thanks; I look forward to your reply.
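
A hypothetical merged evaluation loop, assuming the DataInput and model.test interfaces quoted elsewhere in these issues; it gathers everything needed for all three metrics in a single pass over the test set:

def evaluate_all(sess, model, test_set, batch_size):
    scores, labels, items, unexps = [], [], [], []
    for _, uij in DataInput(test_set, batch_size):
        score, label, user, item, unexp = model.test(sess, uij)  # one model call per batch
        scores.extend(score)
        labels.extend(label)
        items.extend(item)
        unexps.extend(unexp)
    # hit rate from scores/labels, coverage from items, mean unexpectedness from unexps
    return scores, labels, items, unexps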

questions about dataset and model structure

  1. What does click (or rating) mean? In test.txt, the value varies from 0 to 10. However, in the paper, when testing the MovieLens model, ratings are binarized with a threshold of 3.5. Which one is right? Should the rating be in the range 0 to 1, or not?

  2. In figure 3, it seems there are two models, Base and Unexpected, which predict r_ui (click-through rate) and Unexp_Factor_ui respectively. However, in the code, they are actually one model, jointly trained with the logits item_b + concat + user_b + unexp_factor*unexp. The label used to train these logits is a binary click/non-click value, not a CTR. Have I understood correctly?

  3. According to the paper, r_ui seems to be calculated by passing three inputs through an MLP layer: the user embedding, the long-term preference (history), and the item embedding. However, in the code, the final logits are
    self.logits = item_b + concat + user_b + unexp_factor*unexp
    It seems the item_b + concat + user_b part is r_ui.
    Why are the biases item_b and user_b added at the end? And why does concat only consider the long-term preference (history) and the item embedding?
    (model.py, line 49: concat = tf.concat([long_preference, item_emb], axis=1))

Question about Mean Shift Func

Regarding Chapter 3.1 (Modeling of Unexpectedness) in the paper, the historical behavior sequence (length n) is clustered into N user interest clusters, and for a new item, unexp is the weighted average distance between each cluster and the embedding of the new item.

But in the code (train.py, mean_shift):

  1. Each historical item is initialized as a center and iteratively shifted, so the number of output centers equals the length of the historical sequence (n = N).

  2. When calculating the distance between the new item and the clusters, all centers are simply averaged, which is not consistent with the "weighted average" over multiple user interest clusters described by formula 3 (see the sketch below).
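
For comparison, a sketch of the reading of formula 3 that this question describes; weighting by cluster size is the questioner's interpretation, not something confirmed by the code:

import numpy as np

def unexp_weighted(item_emb, centers, cluster_sizes):
    # distance from the new item to each user interest cluster center
    dists = np.linalg.norm(centers - item_emb, axis=1)
    # weighted average, with weights proportional to cluster size
    weights = cluster_sizes / cluster_sizes.sum()
    return float(np.dot(weights, dists))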

Question about generating training dataset

# (inside the loop over users that builds train_set)
train_user = train_data.loc[train_data['user_id']==user]
train_user = train_user.sort_values(['hour'])
length = len(train_user)
train_user.index = range(length)
if length > 10:
    for i in range(length-10):
        train_set.append((train_user.loc[i+9,'user_id'], list(train_user.loc[i:i+9,'video_id']), train_user.loc[i+9,'video_id'], float(train_user.loc[i+9,'click'])))

According to the above code, train_set contains user_id, history, the recommended item, and a click/non-click label.

I have two questions here

  1. It seems the history's last item is the same as the recommended item.
    I thought the item whose CTR is being predicted would be completely new, but it isn't. Can you tell me why the history's last element and the recommended item are the same?

  2. As I understand it, the history is the user's watch list. However, when building the history list with list(train_user.loc[i:i+9,'video_id']), it doesn't seem to matter whether the user actually watched the video or not. Since test.txt contains negative samples whose click label is zero, I would expect the history to contain only clicked videos and to omit instances that weren't clicked (see the sketch below). Can you explain why the training data is designed like this?
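
A one-line sketch of the filtering this question proposes (the questioner's suggestion, not the repository's behavior): drop non-clicked rows before building the history window.

train_user = train_user[train_user['click'] > 0].reset_index(drop=True)  # keep clicked items only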

About Mean shift

Thanks for your work; I have some questions about the code.
Could you provide the code for the comparison methods, or explain the following concern:
did you pretrain the cluster model and then use its embeddings to evaluate all methods on unexpectedness, or did you implement each method with its own mean shift?
I couldn't find a clarification in the paper, unless I missed some details.

That would be greatly helpful for me to learn from your research.
Thanks for your patience.

Is the mean shift time-consuming?

Sorry, it's me again. Is the mean shift time-consuming during training? In my reproduction, it seems too slow to train on MovieLens-20M.

Hello, a quick question

Hello. The PURS paper says unexpectedness is computed from the embeddings of historically consumed items against a new item, but the code does not appear to filter the history by whether an item was actually consumed. Is this reasonable, or have I misunderstood and the full behavior history should indeed be used here?

questions regarding the implementation of mean shift

Hi Pan, regarding the code below:
def mean_shift(self, input_X, window_radius=0.2):
    # input_X: batch_size * hist_long * emb_dim
    X1 = tf.expand_dims(tf.transpose(input_X, perm=[0,2,1]), 1)
    X2 = tf.expand_dims(input_X, 1)
    C = input_X
    def _mean_shift_step(C):
        C = tf.expand_dims(C, 3)
        Y = tf.reduce_sum(tf.pow((C - X1) / window_radius, 2), axis=2)
        gY = tf.exp(-Y)
        num = tf.reduce_sum(tf.expand_dims(gY, 3) * X2, axis=2)
        denom = tf.reduce_sum(gY, axis=2, keep_dims=True)
        C = num / denom
        return C
    def _mean_shift(i, C, max_diff):
        new_C = _mean_shift_step(C)
        max_diff = tf.reshape(tf.reduce_max(tf.sqrt(tf.reduce_sum(tf.pow(new_C - C, 2), axis=1))), [])
        return i + 1, new_C, max_diff
    def _cond(i, C, max_diff):
        return max_diff > 1e-5
    n_updates, C, max_diff = tf.while_loop(cond=_cond, body=_mean_shift, loop_vars=(tf.constant(0), C, tf.constant(1e10)))
    return C

I don't quite get how this implements mean shift, especially the "C = num / denom" part. Are there any reference materials for it? Thanks in advance.
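
For intuition, here is a minimal NumPy sketch of one Gaussian-kernel mean-shift step on a single set of points (an illustration, not the repository's code): each center moves to the kernel-weighted mean of all points, which is exactly the num / denom ratio above.

import numpy as np

def mean_shift_step(C, X, radius=0.2):
    # C: (m, d) current centers; X: (n, d) data points
    d2 = ((C[:, None, :] - X[None, :, :]) / radius) ** 2  # scaled squared differences, (m, n, d)
    w = np.exp(-d2.sum(axis=2))                           # Gaussian kernel weights, (m, n)
    num = w @ X                                           # weighted sum of points, (m, d)
    denom = w.sum(axis=1, keepdims=True)                  # total weight per center, (m, 1)
    return num / denom                                    # kernel-weighted mean = shifted centers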

Meaning of utility function and r_ui

As I asked in a previous issue,
I thought there were two different models: Base and Unexpected.
However, it seems they are jointly trained against one target, the click/non-click labels.
Since they are trained jointly on the same task, it seems hard to say that r_ui is CTR (click/non-click) while Unexp_Factor_ui only captures unexpectedness.
As I understand it, one large model contains two separate parallel parts that somehow cooperate to predict the click/non-click value. Did I understand correctly? Or can you explain how the CTR and unexpectedness parts can clearly do their respective jobs?

Is there autoencoders in this implementation?

Hello, your paper is very impressive and helpful to me. I am not familiar with TensorFlow, but I find that no autoencoder is used here; the latent space embeddings are learned directly during training. Are there autoencoders in this implementation, or did I miss something?
