
deep-subspace-clustering-networks's Issues

Not able to reproduce results, Stopping pre-training for ORL dataset?

Hi,
Can you give an idea of roughly which epoch you stopped pre-training at for the ORL dataset? Even after following your approach (early stopping once the visualizations look good), the results vary hugely compared to restoring the pre-trained weights you provided.

Can't find .ckpt file when call saver.restore

I hit a read error when calling restore() in the main function; the error message is below:

Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ~/Documents/DeepSubspace/Deep-subspace-clustering-networks-master/pretrain-model-COIL100/model50.ckpt

To my understanding, self.saver.restore(self.sess, self.model_path) loads the pre-trained model from that directory. However, when I look into the model path, there is no file ending in .ckpt, which is the file type the saver wants for loading the variables. Did you perhaps forget to upload the pre-trained model file?

ZC or CZ?

Hi,
In your paper, the self-expressiveness property is defined as ||Z-ZC||. However, in your code, such as:
z_c = tf.matmul(Coef,z)
this computes CZ rather than ZC, right?
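For what it's worth, the two forms coincide up to a transpose of the data convention: the paper stores samples as columns of Z, while the code stores them as rows. A minimal NumPy check of that identity (variable names are illustrative):

```python
import numpy as np

# Paper: Z ≈ Z C with samples as COLUMNS of Z.
# Code:  z ≈ C z with samples as ROWS of z. Same relation, transposed:
# (C z)^T = z^T C^T.
rng = np.random.default_rng(0)
C = rng.standard_normal((5, 5))   # illustrative coefficient matrix
z = rng.standard_normal((5, 3))   # 5 samples (rows) x 3 latent dims

rows_as_samples = C @ z           # code convention: z_c = tf.matmul(Coef, z)
cols_as_samples = z.T @ C.T       # paper convention on Z = z^T (samples as columns)

assert np.allclose(rows_as_samples.T, cols_as_samples)
```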

How can I get the specific subspace?

I want to analyze the subspaces obtained by clustering and would appreciate any guidance. I am very grateful!

diag(Coef) = 0?

Hi,
I couldn't find the line that forces the self-expressive matrix's diagonal to be zero so as to avoid the trivial solution.
Can you please point me to it?
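For reference, one standard way to enforce diag(C) = 0 is to subtract the diagonal before the self-expressive product. A minimal NumPy sketch of that masking trick (not the repository's actual code; the TF analogues are noted in comments):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((4, 4))   # self-expressive coefficients
Z = rng.standard_normal((4, 8))   # latent codes, one sample per row

# Zero the diagonal so no sample reconstructs itself (avoids the trivial C = I):
C_masked = C - np.diag(np.diag(C))   # TF analogue: C - tf.diag(tf.diag_part(C))
Z_rec = C_masked @ Z                 # TF analogue: tf.matmul(C_masked, Z)

assert np.allclose(np.diag(C_masked), 0.0)
```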

Need Help

Could you provide the code for plotting the figure below?

[image]

I could not find the corresponding function in the Python code.

Replicability of your Research

Greetings, first of all let me say that I read your paper with interest.
Thanks for sharing the code! I'd like to ask you some questions about it.

  1. By running the code as you provided it (without changes), I am able to almost perfectly reproduce your published results (e.g., 14% error on ORL as in Figure 5.a). However, this only happens if I select the maximum accuracy among 100 random runs, out of which I get a mean accuracy of 84.5% with a standard deviation of 1.2% on the same benchmark, using your hyper-parameters. Can we conclude that you followed a similar procedure to obtain the results reported in the paper?
  2. In the README.md, you correctly mention the diagonal constraint on the self-expressive matrix C (diag(C) = 0), which is fundamental when using L1 regularization. To implement this constraint, I used the snippet of code you provide there (tf.matmul((C-tf.diag(tf.diag_part(C))),Z)), substituting it (with variable names adjusted) for line 48 of DSC-Net-L2-ORL.py. Then I replaced line 59 of the same file with
    self.reg_losses = tf.reduce_sum(tf.math.abs(self.Coef)).
    Unfortunately, if I do so, the L1 results differ drastically from the ones tabulated in the paper. Do you have an updated version of the code (perhaps already implementing the diag constraint + L1) that I could check against?
  3. Also, I'm encountering an out-of-memory error when running the code on the COIL100 dataset. May I kindly ask for the specs of the computer/server you used for the experiments (mainly in terms of RAM)?
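A rough NumPy sketch of the two substitutions described in point 2 (diag-constrained self-expression plus an L1 penalty on the coefficients); the function name and weights are illustrative, and this is not the repository's code:

```python
import numpy as np

def dsc_losses(Z, C, lambda1=1.0, lambda2=1.0):
    """Self-expression loss with diag(C) forced to zero, plus an L1 penalty.

    Z: (N, d) latent codes (samples as rows); C: (N, N) coefficients.
    Illustrative only -- the repository computes these terms in TensorFlow.
    """
    C0 = C - np.diag(np.diag(C))                    # diag(C) = 0 constraint
    selfexpress = 0.5 * np.sum((Z - C0 @ Z) ** 2)   # ||Z - C Z||_F^2 / 2
    l1 = np.sum(np.abs(C))   # analogue of tf.reduce_sum(tf.math.abs(self.Coef))
    return lambda1 * l1 + lambda2 * selfexpress

rng = np.random.default_rng(0)
loss = dsc_losses(rng.standard_normal((6, 4)), rng.standard_normal((6, 6)))
assert loss > 0
```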

Thank you very much for your attention
Looking forward to your reply

Applying Deep Subspace Clustering to High Dimensional Data

Having read through the paper, I see that the autoencoder is built primarily from convolutional kernels, since the paper works with images.
I need your input on finding the subspaces for data in the following format:
N individual data points, each a 1-D vector of size X (> 100), i.e., N separate data points of array size > 100.
The goal is to cluster these N data points.
Your input would be very helpful.
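One common way to sanity-check subspace clustering on generic N × X data, with no convolutions involved, is to solve the self-expressive step in closed form on the raw vectors. A small NumPy sketch under that assumption (this is the classical least-squares variant, not the paper's learned network):

```python
import numpy as np

def self_expressive_coeffs(X, lam=0.1):
    """Least-squares self-expression for N samples (rows) of dimension d.

    Solves min_C ||X - C X||_F^2 + lam ||C||_F^2 in closed form:
        C = X X^T (X X^T + lam I)^{-1}
    A sketch only; the paper instead learns C inside an autoencoder.
    """
    G = X @ X.T
    C = G @ np.linalg.inv(G + lam * np.eye(X.shape[0]))
    np.fill_diagonal(C, 0.0)   # heuristic: drop trivial self-reconstruction
    return C

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 150))   # N = 20 points, each a 1-D vector of size 150
C = self_expressive_coeffs(X)
W = np.abs(C) + np.abs(C).T          # symmetric affinity; feed to spectral clustering
assert W.shape == (20, 20)
```

The resulting affinity W would then go to a standard spectral-clustering step to obtain the N cluster labels.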

image_size

Sorry, I have another question.

Section 4.1 of the article says, 'Following the experimental setup of [10], we down-sampled the original face images from 192 × 168 to 42 × 42 pixels.' However, the image size is 48 × 42 pixels in the code?

Parameters of comparative experiment "EDSC"

Hello, when convenient, could you share with me the parameters for "Efficient dense subspace clustering" (your other paper, also used as a compared method on COIL20)? I can't reproduce its results.

How to fix the N problem?

Hi~ Nice job! After reading your paper, though, I have the same question as NIPS Reviewer 2:

One of the drawbacks of the proposed approach is the fact that the number of weights between the middle layers of the network is $N^2$, given $N$ data points. Thus, it seems that the method will not be scalable to very large datasets.

Any idea how to fix it? Thanks in advance.
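As a back-of-the-envelope illustration of the N² issue, the memory footprint of the coefficient matrix alone grows quadratically with the dataset size (a hypothetical helper, assuming float32 storage):

```python
# Memory footprint of the N x N self-expressive matrix C (float32, 4 bytes/entry).
def coef_matrix_bytes(n_samples, bytes_per_entry=4):
    return n_samples * n_samples * bytes_per_entry

assert coef_matrix_bytes(1_000) == 4_000_000          # ~4 MB: fine
assert coef_matrix_bytes(100_000) == 40_000_000_000   # ~40 GB: infeasible for large N
```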
