
blob_loss's People

Contributors

neuronflow

Forkers

whuhxb aymuos15

blob_loss's Issues

Mean label loss does not seem to be at the right indentation

Hi!

Maybe this is just my poor understanding, but I would find it more "logical" to unindent lines 148 to 157 in blob_loss.py:

            # compute mean
            vprint("label_loss:", label_loss)
            # mean_label_loss = 0
            vprint("blobs in crop:", len(label_loss))
            if not len(label_loss) == 0:
                mean_label_loss = sum(label_loss) / len(label_loss)
                # mean_label_loss = sum(label_loss) / \
                #     torch.count_nonzero(label_loss)
                vprint("mean_label_loss", mean_label_loss)
                element_blob_loss.append(mean_label_loss)

so that element_blob_loss is only appended to once for each blob in the batch? Is there a specific reason why it is at this indentation level?
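To make the difference concrete, here is a minimal, self-contained sketch (with made-up per-blob losses, not the real blob_loss code) of appending the mean inside the loop versus after it:

```python
# Made-up per-blob losses for one batch element (illustration only).
per_blob_losses = [0.2, 0.4, 0.6]

# Current indentation: the running mean is recomputed and appended
# on every iteration of the loop over blobs.
inside, label_loss = [], []
for loss in per_blob_losses:
    label_loss.append(loss)
    if not len(label_loss) == 0:
        inside.append(sum(label_loss) / len(label_loss))

# Proposed indentation: the mean is appended once, after the loop.
outside, label_loss = [], []
for loss in per_blob_losses:
    label_loss.append(loss)
if not len(label_loss) == 0:
    outside.append(sum(label_loss) / len(label_loss))

print(len(inside), len(outside))  # 3 appends vs. a single append
```

With the current indentation, each batch element contributes as many entries to the result list as it has blobs, which weights elements with many blobs more heavily.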

Best

A convergence question about blob_loss

I have been following your blob_loss recently. My dataset is CT plaque data. When I apply blob_loss to it, using dice_loss for each connected component, I find that blob_loss does not converge within 200 epochs. I hope you can give me some advice. The code is as follows:
import torch
import torch.nn as nn
from skimage import measure


class bblob_loss(nn.Module):
    def __init__(self):
        super(bblob_loss, self).__init__()
        self.dice = DiceLoss()

    def forward(self, network_outputs, multi_label):
        # pass the raw outputs through a sigmoid
        network_outputs = torch.sigmoid(network_outputs)

        batch_length = multi_label.shape[0]
        element_blob_loss = []
        # loop over elements
        for element in range(batch_length):
            if element < batch_length:
                end_index = element + 1
            elif element == batch_length:
                end_index = None

            element_label = multi_label[element:end_index, ...]
            element_output = network_outputs[element:end_index, ...]

            element_label = element_label.squeeze(0).cpu().numpy()
            element_label = element_label.squeeze(0)
            element_label = measure.label(element_label)
            element_label = torch.from_numpy(element_label)
            element_label = element_label.unsqueeze(0)
            element_label = element_label.unsqueeze(0).cuda()

            element_output = element_output.squeeze(0).detach().cpu().numpy()
            element_output = element_output.squeeze(0)
            element_output = measure.label(element_output)
            element_output = torch.from_numpy(element_output)
            element_output = element_output.unsqueeze(0)
            element_output = element_output.unsqueeze(0).cuda()
            # print('element_output', len(torch.unique(element_output)))
            # print('label', element_label.size())
            unique_labels = torch.unique(element_label)
            blob_count = len(unique_labels) - 1
            label_loss = []
            for ula in unique_labels:
                if ula != 0:
                    label_mask = element_label > 0
                    label_mask = ~label_mask
                    label_mask[element_label == ula] = 1
                    the_label = element_label == ula
                    the_label_int = the_label.int()
                    masked_output = element_output * label_mask
                    try:
                        blob_loss = self.dice(masked_output, the_label_int)
                    except Exception:
                        # if int does not work we try float
                        blob_loss = self.dice(masked_output, the_label.float())
                    label_loss.append(blob_loss)

                if not len(label_loss) == 0:
                    mean_label_loss = sum(label_loss) / len(label_loss)
                    element_blob_loss.append(mean_label_loss)

        if not len(element_blob_loss) == 0:
            mean_element_blob_loss = sum(element_blob_loss) / len(element_blob_loss)
        return mean_element_blob_loss
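For reference, here is a minimal, pure-Python 1D sketch of the per-blob masking scheme as I understand it from blob_loss: connected components are computed on the label only, and the raw sigmoid output is masked (not relabeled). The soft_dice_loss here is a hypothetical stand-in for DiceLoss:

```python
# Toy 1D example: ground-truth mask with two "blobs" (runs of 1s).
label = [0, 1, 1, 0, 0, 1, 1, 1, 0]

# Connected-component labeling in 1D: consecutive 1s share an ID.
cc, current, prev = [], 0, 0
for v in label:
    if v == 1 and prev == 0:
        current += 1
    cc.append(current if v == 1 else 0)
    prev = v

# Raw sigmoid-like probabilities from the network (kept as-is, not relabeled).
output = [0.1, 0.9, 0.8, 0.2, 0.1, 0.7, 0.6, 0.9, 0.3]

def soft_dice_loss(pred, target, eps=1e-6):
    # hypothetical stand-in for DiceLoss on flattened inputs
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

losses = []
for ula in sorted(set(cc) - {0}):
    # keep background and the current blob; zero out the other blobs
    masked_output = [o if (c == 0 or c == ula) else 0.0
                     for o, c in zip(output, cc)]
    the_label = [1.0 if c == ula else 0.0 for c in cc]
    losses.append(soft_dice_loss(masked_output, the_label))

mean_blob_loss = sum(losses) / len(losses)
```

Note that running measure.label on the sigmoid output, as the snippet above does, replaces the probabilities with integer component IDs before they reach the dice loss, which may be worth checking as a cause of non-convergence.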

What are the instances in the blob_loss function?

Hi, thanks for sharing this newly designed loss function for medical segmentation!

I have followed the code, and I am wondering what the instances in the code are.

for element in range(batch_length):

If I understood the above code correctly, it looks like multi_label encodes how many instances there are. So, is this the number of ground-truth instances?

multi_label.shape[0] # is this the number of instances from the ground-truth label?

The confusing point is whether each component needs to be separated out from a single ground-truth label. Say there is a brain with 10 tumors, but they are all labeled in one single 3D NIfTI file. Maybe it needs to be transformed into the shape [10, depth, height, width] to be used with the blob function you provided.

My follow-up question is: the prediction typically has a different number of instances than the ground-truth label. Have you considered this as well? For example, a prediction might have over 20 tumor instances compared to 10 in the ground truth. Although one could sort the components from biggest to smallest and try to map them to the ground truth, I am not sure whether they could be properly compared.

I think I misunderstand a big part of your code from the beginning, and I would appreciate it if you could clarify. I liked the idea in blob_loss of penalizing FPs more as the number of instances increases, since I have also observed in my task that there are a lot of FPs, scattered or connected, in the predictions. I hope I can get some ideas from your research.

Best
Joey

Is each instance in the label assigned a different pixel value?

According to blob_loss.py, is each instance in the label image assigned a different pixel value? For example, pixel value 1 represents instance 1, and pixel value 2 represents instance 2. This is quite different, because we usually label every instance within one class with the same pixel value.
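For what it's worth, skimage.measure.label (which blob_loss.py itself imports) converts a standard one-value-per-class mask into exactly this kind of per-instance encoding, so a single-value label can be relabeled on the fly. A minimal sketch, assuming scikit-image is installed:

```python
import numpy as np
from skimage import measure

# A standard semantic label: every instance shares the same value (1).
semantic = np.zeros((6, 6), dtype=np.uint8)
semantic[0:2, 0:2] = 1   # instance A
semantic[4:6, 4:6] = 1   # instance B

# measure.label assigns a distinct integer ID to each connected
# component, producing the per-instance pixel values asked about here.
instances = measure.label(semantic)

print(np.unique(instances))  # → [0 1 2]: background plus one ID per instance
```

So the on-disk label can stay single-valued; the instance IDs only need to exist at loss-computation time.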
