
Comments (14)

matenure commented on June 14, 2024

i) I guess some of the errors are due to the TensorFlow version update. For example, tf.concat(0, attention) was the correct usage in a previous version, but I think the API has since changed (I have not used TensorFlow for a while; I am mainly using PyTorch now).

ii) output_dim = input_dim; here the input_dim is 222 instead of 582. The other shapes seem correct.
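
To make the corrected shapes concrete, here is a minimal trace of the two layers (a sketch assuming hidden1 = 64 and output_dim = input_dim = 222, the number of drugs/nodes, as stated above; the variable names are illustrative):

    import numpy as np

    n_nodes, n_feats, hidden1 = 222, 582, 64
    X  = np.zeros((n_nodes, n_feats))   # features: (222, 582)
    A  = np.zeros((n_nodes, n_nodes))   # mixed adjacency: (222, 222)
    W1 = np.zeros((n_feats, hidden1))   # layer-1 weights: (582, 64)
    W2 = np.zeros((hidden1, n_nodes))   # layer-2 weights: (64, 222)

    H1  = A @ X @ W1                    # hidden representation: (222, 64)
    out = A @ H1 @ W2                   # reconstruction: (222, 222)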


matenure commented on June 14, 2024

The drug indication vectors are just multi-hot vectors. Each dimension is an indication.
To see one example, you can refer to the SIDER database: http://sideeffects.embl.de/drugs/2756/
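
A minimal sketch of such a multi-hot vector (the indication names here are hypothetical, purely for illustration):

    import numpy as np

    all_indications = ["hypertension", "nausea", "migraine", "insomnia"]  # hypothetical vocabulary
    drug_indications = {"nausea", "migraine"}  # indications listed for one drug

    # 1 in every dimension whose indication the drug has, 0 elsewhere
    vector = np.array([1 if ind in drug_indications else 0 for ind in all_indications])
    print(vector)  # [0 1 1 0]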


bbjy commented on June 14, 2024

Thank you for your reply. I will try to calculate the feature vectors as you said. @matenure


Abhinav43 commented on June 14, 2024

Hi @bbjy, I am also facing the same dataset problem. Have you found any way to run Cora or any other example dataset with this network?

Looking forward to your reply. Thank you :)


bbjy commented on June 14, 2024

@Abhinav43 Sorry, I failed to run this network.


Abhinav43 commented on June 14, 2024

@bbjy what was the error you were getting?

I am facing an attention error:

    def attention(self):
        self.attweights = tf.get_variable("attWeights",[self.num_support, self.output_dim],initializer=tf.contrib.layers.xavier_initializer())
        #self.attbiases = tf.get_variable("attBiases",[self.num_support, self.output_dim],initializer=tf.contrib.layers.xavier_initializer())
        attention = []
        self.attADJ = []
        for i in range(self.num_support):
            #tmpattention = tf.matmul(tf.reshape(self.attweights[i],[1,-1]), self.adjs[i])+tf.reshape(self.attbiases[i],[1,-1])
            tmpattention = tf.matmul(tf.reshape(self.attweights[i], [1, -1]), self.adjs[i])
            #tmpattention = tf.reshape(self.attweights[i],[1,-1]) #test the performance of non-attentive vector weights
            attention.append(tmpattention)
        attentions = tf.concat(0, attention)
        self.attention = tf.nn.softmax(attentions,0)
        for i in range(self.num_support):
            self.attADJ.append(tf.matmul(tf.diag(self.attention[i]),self.adjs[i]))

        self.mixedADJ = tf.add_n(self.attADJ)

I think there is an error here, or is it actually possible to concat with 0?
attentions = tf.concat(0, attention)


matenure commented on June 14, 2024

@Abhinav43 attentions = tf.concat(0, attention) is not concatenating with 0; it concatenates the tensors in the attention list along the first dimension :)
Sorry for my late reply due to travel. I will also address the other issues soon.
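
For reference, the tf.concat argument order was swapped between TensorFlow releases; a minimal sketch of both call styles, with illustrative tensors:

    import tensorflow as tf

    attention = [tf.ones([1, 4]), tf.ones([1, 4]), tf.ones([1, 4])]

    # TensorFlow <= 0.12 used tf.concat(concat_dim, values):
    #     attentions = tf.concat(0, attention)

    # TensorFlow >= 1.0 uses tf.concat(values, axis):
    attentions = tf.concat(attention, axis=0)  # shape (3, 4)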


Abhinav43 commented on June 14, 2024

Hi Ma, thanks for your reply.

I went through your code and I am facing some issues:

i) If I do it this way, it works:

    support = []
    preprocess_adj = preprocess_adj_dense(adjs)
    for adj in adjs:
        support.append(preprocess_adj)
        num_supports = len(support)

but if I do it the way shown in the code, then I get an error:

    support = []
    for adj in adjs:
        support.append(preprocess_adj_dense(adj))
    num_supports = len(support)

ValueError: shapes (1,2708) and (1,1) not aligned: 2708 (dim 1) != 1 (dim 0)
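
A hedged guess at the mismatch: if adjs here is a single Cora adjacency matrix rather than a list of per-view matrices, then for adj in adjs iterates over its rows, and per-matrix preprocessing fails on a 1-D row. For context, a minimal sketch of the standard dense GCN renormalization that preprocess_adj_dense presumably performs on one square matrix (an assumption; the actual helper may differ):

    import numpy as np

    def preprocess_adj_dense(adj):
        """D^{-1/2} (A + I) D^{-1/2} for a single dense square adjacency matrix."""
        adj_tilde = adj + np.eye(adj.shape[0])              # add self-loops
        d_inv_sqrt = np.power(adj_tilde.sum(axis=1), -0.5)  # inverse sqrt degrees
        d_inv_sqrt[np.isinf(d_inv_sqrt)] = 0.0              # guard isolated nodes
        d_mat = np.diag(d_inv_sqrt)
        return d_mat @ adj_tilde @ d_mat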

ii) I got your point that tf.concat(0, attention) concatenates attention along the first dimension, but the tf.concat documentation says it takes the values to concatenate first and the axis parameter after them, so the signature is:

tf.concat(
    values,
    axis,
    name='concat'
)

So in tf.concat(0, attention), attention is a list; where is the second matrix we are concatenating with? Also, why have you passed axis = 0 as the values argument to tf.concat?

iii)
I am having a little difficulty understanding the flow of the network:

So suppose our hidden_1 = 64, hidden_2 = 16, output_dim = 4

adj shape : (222, 222)
features/x : (222, 582)

input_dim = features.shape[1]

where input_dim is the second dimension of the feature matrix (582)

    first_layer = first_convo_layer(input_dim=input_dim,
                                    output_dim=hidden1,
                                    placeholders=self.placeholders,
                                    support=self.mixedADJ)

first_layer( [features] * weight_shape[input_dim, output_dim] )
first_layer( [222, 582] * [582, 16] ) --> [222, 16]

attention( [something, 222] * [222, 16] ) --> [something, 16]

    second_layer = second_convo_layer(input_dim=hidden_1,
                                      output_dim=input_dim,
                                      placeholders=self.placeholders,
                                      support=self.mixedADJ)

second_layer( [something, 16] * [16, 582] ) --> [something, 582]

attention( [something, 222] * ?? )

I am confused about how to calculate the shapes in the network flow. Can you show me the flow of the network with shapes?

Thank you, and I am sorry if I am troubling you a lot :)

Waiting for your reply.


Abhinav43 commented on June 14, 2024

Thank you for the reply :)


SudhirGhandikota commented on June 14, 2024

Hello @matenure, I ran into an issue related to the dimensions too, and hence chose not to create a new issue. Hope that is fine with you.

The issue here is that the shape used for the attention weights (and hence their size) is equal to the dimension of the labels, as seen in the AttSemiGraphEncoder class.
self.output_dim = placeholders['labels'].get_shape().as_list()[1]

But while computing the attention scores, matrix multiplication is performed with the individual adjacency matrices, which are square matrices where num_of_rows = num_of_cols = num_of_drugs.
So this step, understandably, throws an error since the dimensions are not compatible for matrix multiplication. Don't you think the dimension of the attention weights (i.e. the row vector size) should be equal to the number of nodes or number of drugs?

Just to illustrate this: if the drug labels are 6-dimensional vectors and the number of drugs is 1000, the attention weights are of dimension 6 right now, while they should be of dimension 1000.

P.S. great paper by the way 👍


matenure commented on June 14, 2024

@SudhirGhandikota You are right, the dimensions of the attention weights are equal to the number of nodes/drugs. And in fact the code "self.output_dim = placeholders['labels'].get_shape().as_list()[1]" already does this :) Because our code does not discriminate between the different types of DDIs, our label is a vector whose length is the number of nodes (each dimension means whether this drug connects to a target drug). I know it is not a very elegant way...
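
In other words, a shape sketch with hypothetical numbers following the explanation above: because each label row is itself a node-length connectivity vector, the labels have shape (num_drugs, num_drugs), so output_dim equals num_drugs and the attention weight rows line up with the square adjacency matrices:

    import numpy as np

    num_drugs = 1000
    labels = np.zeros((num_drugs, num_drugs))  # labels[i, j]: does drug i connect to drug j?
    output_dim = labels.shape[1]               # 1000 == num_drugs

    att_row = np.zeros((1, output_dim))        # one attention weight row
    adj = np.zeros((num_drugs, num_drugs))     # one view's adjacency matrix
    score = att_row @ adj                      # (1, 1000): shapes align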


SudhirGhandikota commented on June 14, 2024

@matenure Thanks for the confirmation 👍


rojinsafavi commented on June 14, 2024

Is there a pytorch version available?


matenure commented on June 14, 2024

Is there a pytorch version available?

@rojinsafavi Unfortunately we do not have a PyTorch version.
