
active-contour-loss's Introduction

Active Contour Loss

Implementation of the active contour loss function for medical image segmentation, based on "Learning Active Contour Models for Medical Image Segmentation" by Xu Chen et al.

Introduction

The arXiv version of this paper will be available soon.

Requirements

TensorFlow >= 1.5

Keras >= 2.0

NumPy

Training

Using a pretrained model is recommended, because the active contour loss can be unstable in the early steps of training.
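
As a rough sketch of that workflow (not part of this repository): load a model pretrained with a stable loss and then re-compile it with the active contour loss. The import path, file name, and hyperparameters below are assumptions, not the authors' settings.

    import keras

    # Hypothetical import path; Active_Contour_Loss is the (y_true, y_pred) loss
    # function provided by this repository.
    from AC_loss import Active_Contour_Loss

    # Load a segmentation model pretrained with a stable loss (e.g. Dice or
    # cross-entropy); the file name is only a placeholder.
    model = keras.models.load_model("unet_pretrained.h5", compile=False)

    # Fine-tune with the active contour loss.
    model.compile(optimizer=keras.optimizers.Adam(1e-4), loss=Active_Contour_Loss)
    # model.fit(train_images, train_masks, batch_size=8, epochs=20)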

Citation

If you find Active-Contour-Loss useful in your research, please consider citing:

@inproceedings{chen2019learning,
  title={Learning Active Contour Models for Medical Image Segmentation},
  author={Chen, Xu and Williams, Bryan M and Vallabhaneni, Srinivasa R and Czanner, Gabriela and Williams, Rachel and Zheng, Yalin},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={11632--11640},
  year={2019}
}

Other Re-implementations

...

active-contour-loss's People

Contributors

xuuuuuuchen

active-contour-loss's Issues

What's the input shape in AC loss?

Hi @xuuuuuuchen ,

Thank you very much for sharing the code.

  1. What's the input shape in the AC loss?
    Intuitively, the input shape is:
    y_true.shape = (batch_size, class_num, img_x, img_y) = y_pred.shape.
    For the ACDC dataset, the shape is (batch_size, 4, 256, 256).

However, in the discussion section, the paper says that "the proposed loss function can also be extended to solve multi-phase segmentation problems". Does that mean the paper uses the AC loss in its binary form?

  2. A technical question: what is the purpose of the following code? (A shape check is sketched after this message.)
    delta_x = x[:,:,1:,:-2]**2
    delta_y = y[:,:,:-2,1:]**2

Best,
Jun
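
For what it's worth, a quick NumPy shape check (assuming the channels-first (batch_size, class_num, H, W) layout from the question) shows what the slicing does: the raw differences have mismatched shapes, and the extra offsets crop both to a common (H-2, W-2) grid so they can be added; whether those particular offsets are intentional is what this and a later issue ask about. A minimal sketch:

    import numpy as np

    y_pred = np.zeros((8, 4, 256, 256))             # (batch, class, H, W), e.g. ACDC

    x = y_pred[:, :, 1:, :] - y_pred[:, :, :-1, :]  # differences along the height axis, (8, 4, 255, 256)
    y = y_pred[:, :, :, 1:] - y_pred[:, :, :, :-1]  # differences along the width axis,  (8, 4, 256, 255)

    delta_x = x[:, :, 1:, :-2] ** 2                 # (8, 4, 254, 254)
    delta_y = y[:, :, :-2, 1:] ** 2                 # (8, 4, 254, 254)

    print(delta_x.shape, delta_y.shape)             # equal shapes, so the two terms can be added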

Is this really the implementation from the paper?

For example:

C_2 = np.zeros((256, 256))
...
region_out = K.abs(K.sum( (1-y_pred[:,0,:,:]) * ((y_true[:,0,:,:] - C_2)**2) ))

So you're subtracting a tensor of all zeros from y_true in (y_true[:,0,:,:] - C_2)?
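
For context, my reading of the snippet (not a statement from the author): the region terms follow a Chan–Vese-style region energy with fixed constants c1 = 1 and c2 = 0, so subtracting the all-zero C_2 is numerically a no-op and only keeps the code parallel to the general formulation:

$$\text{region}_{\text{in}} = \left|\sum_{i,j} u_{i,j}\,(v_{i,j}-c_1)^2\right|, \qquad \text{region}_{\text{out}} = \left|\sum_{i,j} (1-u_{i,j})\,(v_{i,j}-c_2)^2\right|$$

where u is the selected channel of y_pred, v is the corresponding channel of y_true, c1 = 1 and c2 = 0.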

Why is the length a component of the loss?

Hi, I have some questions about the length term.
T(length) is the length of the target; P(length) is the length of the prediction.
And |P - T| is a component of the loss.
Is that right?
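
For reference, the length term as implemented in the code (labelled equ. (11) in the comments) is computed from the prediction u = y_pred alone, not from a difference between a predicted and a target length; ignoring the extra slicing offsets, it is a total-variation-style approximation of the contour length:

$$\text{Length} \approx \sum_{i,j} \sqrt{(u_{i+1,j}-u_{i,j})^2 + (u_{i,j+1}-u_{i,j})^2 + \epsilon}$$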

Can't get a good Dice score while using the AC loss

Hi Xu,

I used the AC loss to try segmenting two-class images. Although the loss is decreasing, the Dice score doesn't improve at all; on the contrary, it sits at a fairly low value, about 0.0001.

import torch
from torch.nn import Module

class ActiveContourLoss(Module):
    def __init__(self):
        super(ActiveContourLoss, self).__init__()

    def forward(self, y_pred, y_true):

        x = y_pred[:,:,1:,:] - y_pred[:,:,:-1,:] # horizontal and vertical directions
        y = y_pred[:,:,:,1:] - y_pred[:,:,:,:-1]

        delta_x = x[:,:,1:,:-2]**2
        delta_y = y[:,:,:-2,1:]**2
        delta_u = torch.abs(delta_x + delta_y)

        epsilon = 0.00000001 # a small parameter to avoid a zero under the square root in practice
        w = 1.

        lenth = w * torch.sum(torch.sqrt(delta_u + epsilon)) # equ.(11) in the paper

        C_1 = torch.ones(y_true.shape, dtype=torch.float32).cuda()
        C_2 = torch.zeros(y_true.shape, dtype=torch.float32).cuda()

        region_in = torch.abs(torch.sum(y_pred * ((y_true - C_1)**2))) # equ.(12) in the paper
        region_out = torch.abs(torch.sum((1. - y_pred) * ((y_true - C_2)**2))) # equ.(12) in the paper

        lambdaP = 5. # the lambda parameter can be varied
        loss = lenth + lambdaP * (region_in + region_out)

        return loss

This is my PyTorch implementation. In y_pred[:,0,:,:], what does the 0 mean; is it the channel index? Does y_pred need to go through a sigmoid before being passed in? And is the input shape (channel, batch_size, H, W) or (batch_size, channel, H, W)?
My y_pred shape is (16, 1, 512, 512). Do I need to modify it?

Best,
qaqzzz
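
One common setup, offered only as an assumption and not as the author's answer: the region terms treat y_pred as a probability map in [0, 1] with a (batch_size, channel, H, W) layout, so a sigmoid (for a single foreground channel) or a softmax is usually applied before the loss. A minimal sketch using the class above (it assumes a GPU, since the class calls .cuda()):

    import torch

    device = torch.device("cuda")   # the loss above moves C_1 and C_2 to the GPU

    logits = torch.randn(16, 1, 512, 512, device=device)                    # raw network output, (batch, channel, H, W)
    y_true = torch.randint(0, 2, (16, 1, 512, 512), device=device).float()  # binary ground-truth mask

    y_pred = torch.sigmoid(logits)              # map logits to [0, 1] before the loss
    loss = ActiveContourLoss()(y_pred, y_true)  # the class defined above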

Question about the implementation of contour extraction

Hi @xuuuuuuchen ,

I am having some trouble understanding the implementation used to compute the contour length.

	x = y_pred[:,:,1:,:] - y_pred[:,:,:-1,:] # horizontal and vertical directions 
	y = y_pred[:,:,:,1:] - y_pred[:,:,:,:-1]

	delta_x = x[:,:,1:,:-2]**2
	delta_y = y[:,:,:-2,1:]**2
	delta_u = K.abs(delta_x + delta_y) 

According to the implementation above, I drew an illustration to visualize y_pred, delta_x and delta_y:
[illustration of the y_pred, delta_x and delta_y grids]

As you can see, delta_x and delta_y are not well aligned in the coordinate grid. In that case, the extracted contour might be incorrect.

Could you help me figure out what's wrong with my understanding?

Many thanks!
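
One way to probe the alignment question empirically (a sanity check of my own, not an answer from the author): apply the same slicing to a small binary square and compare where the two squared-difference terms are nonzero:

    import numpy as np

    u = np.zeros((1, 1, 8, 8))
    u[0, 0, 2:6, 2:6] = 1.0                  # a small square "prediction"

    x = u[:, :, 1:, :] - u[:, :, :-1, :]     # differences along the height axis
    y = u[:, :, :, 1:] - u[:, :, :, :-1]     # differences along the width axis

    delta_x = x[:, :, 1:, :-2] ** 2
    delta_y = y[:, :, :-2, 1:] ** 2

    # Print the (row, col) positions where each term is nonzero and compare them.
    print(np.argwhere(delta_x[0, 0] > 0))
    print(np.argwhere(delta_y[0, 0] > 0))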

I wonder what the shape of y_pred is in the AC loss

Hi @xuuuuuuchen

In another issue, you said that the input shape in AC loss is [Batch, 1, W, H].

Does it mean that y_pred is the output of a softmax layer followed by an argmax operator, or a pick operator that selects the probability of the foreground (or background)?

If the shape of y_pred is [Batch, 1, W, H], why do you pick the first channel (y_pred[:, 0, :, :]) when you implement Eq. (12) but not when you implement Eq. (11)?

Loss is not minimizing after 2nd epoch.

Hello everyone! I tried to implement the AC loss function on a U-Net model.
I am using the AC loss as the loss function and the Dice coefficient as the metric.

I am using the NIH dataset for training (https://wiki.cancerimagingarchive.net/display/Public/Pancreas-CT).
Since my data format is different (8, 256, 256, 1), I changed the indexing.

My code is the following:

def Active_Contour_Loss(y_true, y_pred): 
    
    y_true = K.cast(y_true, dtype = 'float64') 
    y_pred = K.cast(y_pred, dtype = 'float64') 
    
    """
    lenth term
    """
       
    #Reordered indexes, because data format is different.
    x = y_pred[:,1:,:,:] - y_pred[:,:-1,:,:] #y_true shape is: (8, 256, 256, 1)  
    y = y_pred[:,:,1:,:] - y_pred[:,:,:-1,:] #y_pred shape is: (8, 256, 256, 1)
        
    delta_x = x[:,1:,:-2,:]**2
    delta_y = y[:,:-2,1:,:]**2    
    
    delta_u = K.abs(delta_x + delta_y) 

    epsilon = 0.00000001 # a small parameter to avoid a zero under the square root in practice
    w = 1
    lenth = w * K.sum(K.sqrt(delta_u + epsilon)) # equ.(11) in the paper

    """
    region term
    """

    C_1 = np.ones((256, 256))
    C_2 = np.zeros((256, 256))    
    
    #Reordered indexes, because data format is different.
    region_in = K.abs(K.sum( y_pred[:,:,:,0] * ((y_true[:,:,:,0] - C_1)**2) ) ) # equ.(12) in the paper
    region_out = K.abs(K.sum( (1-y_pred[:,:,:,0]) * ((y_true[:,:,:,0] - C_2)**2) )) # equ.(12) in the paper

    lambdaP = 1 # the lambda parameter can be varied

    loss = lenth + lambdaP * (region_in + region_out) # equ. (8) in the paper
    
    return loss

However, the training does not go well:

Epoch 1/20
18750/18750 - 3202s - loss: 3657.6096 - dice_coef: 0.0187 - val_loss: 909.7355 - val_dice_coef: 0.5784
Epoch 2/20
18750/18750 - 3150s - loss: 716.8154 - dice_coef: 0.0240 - val_loss: 909.7353 - val_dice_coef: 0.5785
Epoch 3/20
18750/18750 - 3158s - loss: 717.3530 - dice_coef: 0.0230 - val_loss: 909.7353 - val_dice_coef: 0.5785
Epoch 4/20
18750/18750 - 3161s - loss: 718.0414 - dice_coef: 0.0221 - val_loss: 909.7353 - val_dice_coef: 0.5785
Epoch 5/20
18750/18750 - 3126s - loss: 717.6503 - dice_coef: 0.0222 - val_loss: 909.7353 - val_dice_coef: 0.5785
Epoch 6/20
18750/18750 - 3133s - loss: 718.0792 - dice_coef: 0.0229 - val_loss: 909.7353 - val_dice_coef: 0.5785
Epoch 7/20
18750/18750 - 3092s - loss: 717.6919 - dice_coef: 0.0229 - val_loss: 909.7353 - val_dice_coef: 0.5785

Any ideas what I am doing wrong?

Thanks in advance!
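
A hedged debugging suggestion (mine, not the repository author's): log the three terms separately on a fixed batch, since a plateau like this often means one term dominates the total or the predictions have saturated. The helper below just re-computes the terms from the code above in NumPy so they can be inspected outside the training loop:

    import numpy as np

    def ac_terms(y_true, y_pred, eps=1e-8):
        """NumPy re-computation of the three loss terms, channels-last (batch, H, W, 1) layout."""
        x = y_pred[:, 1:, :, :] - y_pred[:, :-1, :, :]
        y = y_pred[:, :, 1:, :] - y_pred[:, :, :-1, :]
        delta_u = np.abs(x[:, 1:, :-2, :] ** 2 + y[:, :-2, 1:, :] ** 2)
        lenth = np.sum(np.sqrt(delta_u + eps))
        region_in = np.abs(np.sum(y_pred[:, :, :, 0] * (y_true[:, :, :, 0] - 1.0) ** 2))
        region_out = np.abs(np.sum((1.0 - y_pred[:, :, :, 0]) * y_true[:, :, :, 0] ** 2))
        return lenth, region_in, region_out

    # Usage sketch: preds = model.predict(image_batch); print(ac_terms(mask_batch, preds))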
