SemanticSoftSegmentation's People

Contributors

yaksoy


SemanticSoftSegmentation's Issues

Not able to integrate in Octave

Image graphs exist in Octave's matgeom package, but I am not able to find the ImageGraph dependency here. Please let me know whether this repo can be run in Octave.

demo
warning: addpath: /home/saurabh/Documents/code/SemanticSoftSegmentation/ImageGraph: No such file or directory
warning: called from
demo at line 4 column 1
Semantic Soft Segmentation
error: 'superpixels' undefined near line 18 column 20
error: called from
Superpixels at line 18 column 18
SemanticSoftSegmentation at line 17 column 17
demo at line 20 column 5
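For what it's worth, superpixels() ships with MATLAB's Image Processing Toolbox and appears to have no drop-in equivalent in Octave's image packages, so the repo cannot run there as-is. A minimal guard (an illustration, not the repo's exact call in Superpixels.m) that at least turns the failure into a clear message:

  % Fail early with a clear message when superpixels() is unavailable (e.g. under Octave).
  if exist('superpixels', 'file') == 0
      error(['superpixels() was not found. It is part of MATLAB''s Image ' ...
             'Processing Toolbox and is not shipped with Octave.']);
  end
  [labels, numLabels] = superpixels(image, 2500);  % 2500 is an arbitrary example count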

Getting different results

Hi,
I used your semantic feature generation code to generate the features for the image 'docia.png', but I am seeing differences in the results. Please see the images below and guide me if I am doing anything wrong; you can see the difference in the generated features. After getting the feature vector, I feed it to the 'preprocessFeatures.m' file. Is there anything else I need to do?

In this picture, I generated the features through the code and then fed the image as input to the demo.m file.
[image: outsamp]
This result was generated from the image you provided as docia.png.
[image: untitled1]

Python port?

Hello

I would love to try the technique introduced in the paper. However, I do not have MATLAB, and I honestly find it not such a great idea to mix MATLAB and Python code. Is it possible to release a Python port of this project? (Python seems to be the standard for neural networks.)

Best regards

Remove Background

This is the first time I am using MATLAB, and I am very curious to reproduce every result from the paper. Can you help me remove the background? How should I proceed with removing the background after soft segmentation?
[image: fig]

This file is generated after running demo.m. Now I want to remove whatever is in the background. Please give me a direction to follow.
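If it helps, a common way to remove the background is to pick the soft-segment layer that covers the subject and use it as an alpha matte. A hedged sketch, assuming sss is the H-by-W-by-K layer stack returned by SemanticSoftSegmentation in demo.m and that layer k covers the foreground:

  % Treat one soft-segment layer as an alpha matte (choose k by inspecting the layers).
  k = 1;                                               % placeholder layer index
  alpha = sss(:, :, k);                                % soft alpha values in [0, 1]
  fg = im2double(image) .* repmat(alpha, [1 1 3]);     % foreground composited over black
  imwrite(fg, 'foreground_on_black.png');
  imwrite(im2double(image), 'foreground.png', 'Alpha', alpha);  % PNG with transparency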

There are some errors, can you help me?

When I run this code, there are some errors:
Error using .*
Matrix dimensions must agree.

Error in groupSegments (line 12)
cc = segments(:,:,i) .* features;

Error in SemanticSoftSegmentation (line 42)
groupedSegments = groupSegments(initSoftSegments, features);

Error in demo (line 20)
sss = SemanticSoftSegmentation(image, features);

I don't know what I did wrong; can you help me solve this problem?

Thank you!
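Two hedged guesses as to the cause: either the feature map and the image do not have the same height and width, or the MATLAB release predates R2016b, where elementwise multiplication between an H-by-W matrix and an H-by-W-by-C array is not implicitly expanded. A compatibility rewrite of the failing line using bsxfun would be:

  % Equivalent of groupSegments.m line 12 without relying on implicit expansion
  % (works on MATLAB releases older than R2016b):
  cc = bsxfun(@times, segments(:, :, i), features);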

Description of how to streamline solvers

Hi, @yaksoy
Thank you for your great paper!

I have some questions about ways to solve problems more efficiently.
I think your prototype implementation roughly consists of four processes (excluding the feature extractor); their respective computational costs for a single 640x480 image are as follows:

  1. calculation of Laplacian matrix L from affinities (~1 sec)
  2. calculation of eigenvectors of L (~ 1 min)
  3. solving a constrained sparsification problem (~3 mins)
  4. solving a relaxed sparsification problem (~30 secs)

By the way, you say in the paper:

The efficiency of our method can be optimized in several ways, such as multi-scale solvers, but an efficient implementation of linear solvers and eigendecomposition lies beyond the scope of our paper.

There seem to be three ways to make the execution time shorter: a multi-scale solver, more efficient linear solvers, and more efficient eigendecomposition.

Questions:

  1. Which processes do these optimizations correspond to?
  2. What is a multi-scale solver?
  3. Can I make the heaviest process (process 3) faster? (See the sketch below.)
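A minimal illustration (not the repo's exact code) of the eigendecomposition in process 2 and the usual speed levers; the variable names and the eigenvector count are placeholders:

  % Illustrative sketch: process 2 typically asks eigs() for a small number of
  % eigenvectors of the sparse Laplacian built in process 1. Reducing eigCnt, or
  % solving on a downsampled image first ("multi-scale") and refining, are the
  % common ways to shorten processes 2-4.
  eigCnt = 100;                                                 % placeholder eigenvector count
  [eigVecs, eigVals] = eigs(Laplacian, eigCnt, 'smallestabs');  % use 'sm' before R2017b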

preprocessFeatures.m PCA dimensionality reduction problem

in MATLAB command line:

features = load('docia.mat'); % docia.mat contains the 128-dimensional features
image = imread('docia_one.png'); % docia_one.png is the original picture
preprocessFeatures(features, image)

Operator '<' is not supported for operands of type 'struct'.

Error in preprocessFeatures (line 7)
features(features < -5) = -5;

my analysis:

features is a 640x640x128 array, but -5 is just a number; they cannot be compared.

my aim:
I want to use preprocessFeatures.m to reduce the dimensionality of the high-dimensional semantic feature vectors generated by the Python program, as described in Section 3.5.
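For what it's worth, that error message suggests load() returned a struct wrapping the saved variables rather than the feature array itself. A hedged sketch (the field is read dynamically because the variable name inside docia.mat is not known here):

  % Unwrap the struct returned by load() before calling preprocessFeatures.
  data     = load('docia.mat');
  fn       = fieldnames(data);
  features = double(data.(fn{1}));                  % e.g. a 640x640x128 feature volume
  image    = im2double(imread('docia_one.png'));
  simp     = preprocessFeatures(features, image);   % dimensionality reduction (Section 3.5)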

fastKmeans(X,K) function in softSegmentsFromEigs.m

What this function does doesn't seem to be regular K-means; it iteratively splits clusters into two.
How did you come up with this idea? Or which K-means paper did you reference?
Thanks a lot.

partial code in softSegmentsFromEigs.m:

function [idx, C, sumd, D]=fastKmeans(X,K)
  % X: points in the N-by-P data matrix
  % idx - an N-by-1 vector containing the cluster indices of each point
  % C - the K cluster centroid locations in a K-by-P matrix.
  % sumd - the within-cluster sums of point-to-centroid distances in a 1-by-K vector.
  % D - distances from each point to every centroid in an N-by-K matrix.

  startK = 5;
  startK = min(K,startK);
  maxIters = 100;  % Default of MATLAB is 100

  X = sign(real(X)) .* abs(X);
  
  [idx, C, sumd, D]=kmeans(X,startK,'EmptyAction','singleton','Start','cluster', ...
      'Maxiter', maxIters, 'Replicates', 7);

  valid_vec = zeros(1,startK);
  scr_vec = zeros(1,startK)-1;

  for compCnt = startK+1:K
      % create a new cluster by splitting each cluster to two...
      max_scr=-1;
      clear min_C;
      for cl = 1:compCnt-1
        cl_mask = idx == cl;
        cl_idxs = find(cl_mask);
        clX = X(cl_idxs,:);
        if (size(clX,1)> 2*size(clX,2))
          if (valid_vec(cl) == 0)
            [tmp_idx, tmp_C, ~, tmp_D]=kmeans(clX,2,'EmptyAction','singleton','Start','cluster', 'Maxiter', maxIters);
            % chk how much the partition helps ...
            scr=sum(min(D(cl_idxs,:),[],2))-sum(min(tmp_D,[],2));
            scr_vec(cl) = scr;
          else % we already saved it...
            scr = scr_vec(cl);
          end         
        else
          scr=-2;
          scr_vec(cl) = scr;
        end  
        if (scr > max_scr)
          if (valid_vec(cl)==1) % not for the scr. Just for the idxs.
            [tmp_idx, tmp_C, ~, ~]=kmeans(clX,2,'EmptyAction','singleton','Start','cluster', 'Maxiter', maxIters);
          end
          
          max_scr = scr;
          bestC = [C;tmp_C(2,:)];  
          bestC(cl,:) = tmp_C(1,:);
          best_cl = cl;         

          best_idx = idx;
          best_idx(cl_idxs) = (tmp_idx == 1)*best_cl + (tmp_idx == 2)*compCnt;
        end
        valid_vec(cl) = 1;
      end        
      C = bestC;
      idx = best_idx;
      
      valid_vec = [valid_vec, 0];   % the two new clusters are new, so their
      valid_vec(best_cl) = 0;       % scores have not been computed yet.
      scr_vec = [scr_vec, -1];
      scr_vec(best_cl) = -1;

      if (compCnt < 13)
        [idx, C, sumd, D]=kmeans(X, compCnt, 'EmptyAction', 'singleton', 'Start', C, 'Maxiter', maxIters);       
        valid_vec = zeros(1,compCnt);
      end
  end

end
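A hypothetical call, just to make the interface concrete (eigVecs and K are placeholder names; in softSegmentsFromEigs.m the input is the matrix of per-pixel eigenvector values):

  % Cluster the per-pixel eigenvector values into K groups via iterative splitting.
  K = 40;                                        % placeholder cluster count
  [idx, C, sumd, D] = fastKmeans(eigVecs, K);    % idx: N-by-1 cluster index per point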

About the input data of demo.m

I found the image and features used in demo.m as follows:
image = imread('docia.png');
features = image(:, size(image, 2) / 2 + 1 : end, :);
and it can show the segmentation result.
But I have a question: what is the relation between this project and the sister project, the Python implementation at http://people.inf.ethz.ch/aksoyy/sss/?
Another question:
When I input the 128-dimensional features, the segmentation result is similar to the above.
So why is the sister implementation needed? Can we get the soft semantic segmentation result using just this project?
Many thanks!
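For completeness, a hedged sketch of the convention the snippet above implies: docia.png is a side-by-side image with the photograph on the left and a three-channel visualization of the preprocessed features on the right, and demo.m splits it before calling the main function (check demo.m for the exact lines):

  % Split the side-by-side docia.png into the input image and the feature channels.
  full     = im2double(imread('docia.png'));
  half     = size(full, 2) / 2;
  features = full(:, half + 1 : end, :);   % right half: preprocessed semantic features
  image    = full(:, 1 : half, :);         % left half: the input photograph
  sss      = SemanticSoftSegmentation(image, features);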

Runtime Required For An Image?

This is incredible, and it is exciting to see this grow.

First, I am not sure if I can post questions here. Feel free to close it if it's not the right place.

I wanted to ask: is the runtime per image the same as stated in the paper? That is,

runtime for a 640 × 480 image lies between 3 and 4 minutes

The paper also states this

The efficiency of our method can be optimized in several ways, such as multi-scale solvers, but an efficient implementation of linear solvers and eigendecomposition lies beyond the scope of our paper.

So my question is: is this code optimized so that it could require less runtime, or is it going to be 3-4 minutes per image as stated in the paper?

Let me know,
Thank you!

Unable to produce the results as per the paper

Hi @yaksoy,

I was unable to produce the same results as given in the paper on any other image, and I am also new to MATLAB.

  • I used the SIGGRAPH18SSS repo to generate the features, which gave me the docia.mat file.
  • I then placed the .mat file along with the input PNG in this repo and ran demo.m, where I read the docia.png file as the input; the results I got were as follows.

[image: out]

This is fine, but I was unable to understand how to get the middle section of the PNG, which is highlighted as follows.
[images: op1jpg, jpg with text]

Also, I am not sure whether the code was even using the docia.mat file, or whether it just used the PNG file below.
[image: docia2]

Could you help me understand how to run this so that the input comes from the features (the docia.mat file), as desired?

Regards
Yash
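In case it helps, a hedged end-to-end sketch of using the SIGGRAPH18SSS output directly instead of the features baked into the right half of docia.png; the field name inside docia.mat is an assumption:

  % Raw 128-D features from docia.mat -> PCA-reduced features -> soft segments.
  data     = load('docia.mat');
  fn       = fieldnames(data);
  rawFeat  = double(data.(fn{1}));                   % 128-dimensional semantic features
  image    = im2double(imread('docia.png'));
  image    = image(:, 1 : size(image, 2) / 2, :);    % keep only the photograph half (only for the side-by-side docia.png)
  features = preprocessFeatures(rawFeat, image);     % reduce to low-dimensional features
  sss      = SemanticSoftSegmentation(image, features);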
