
Angular Penalty Softmax Losses Pytorch

Concise PyTorch implementation of the Angular Penalty Softmax Losses presented in ArcFace [1], SphereFace [2], and CosFace / Additive Margin Softmax [3, 4].

(Note: the SphereFace implementation is not exactly as described in their paper but instead uses the 'trick' presented in the ArcFace paper to use arccosine instead of the double angle formula)

from loss_functions import AngularPenaltySMLoss

in_features = 512
out_features = 10 # Number of classes

criterion = AngularPenaltySMLoss(in_features, out_features, loss_type='arcface') # loss_type in ['arcface', 'sphereface', 'cosface']

# Forward method works similarly to nn.CrossEntropyLoss
# x of shape (batch_size, in_features), labels of shape (batch_size,)
# labels should give the class of each sample as an int l satisfying 0 <= l < out_features
loss = criterion(x, labels) 
loss.backward()
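For reference, the three loss types differ only in how the target-class cosine similarity is penalized before scaling. A minimal sketch of the penalties as described in the papers (the function name, signature, and default margins here are illustrative, not this repository's exact API):

```python
import torch

def angular_penalty_logit(cos_theta, loss_type, s=30.0, m=None):
    """Apply an angular penalty to the target-class cosine similarity.

    cos_theta: tensor of cosine similarities between the (normalized)
    embedding and the (normalized) target-class weight vector.
    """
    # Keep acos numerically safe at the boundaries.
    cos_theta = cos_theta.clamp(-1 + 1e-7, 1 - 1e-7)
    if loss_type == 'arcface':     # cos(theta + m)
        m = 0.5 if m is None else m
        return s * torch.cos(torch.acos(cos_theta) + m)
    if loss_type == 'sphereface':  # cos(m * theta), via the arccosine trick
        m = 1.35 if m is None else m
        return s * torch.cos(m * torch.acos(cos_theta))
    if loss_type == 'cosface':     # cos(theta) - m
        m = 0.4 if m is None else m
        return s * (cos_theta - m)
    raise ValueError(f"unknown loss_type: {loss_type}")
```

The penalized logit replaces the plain `s * cos_theta` for the target class only; all other classes keep their unpenalized scaled cosines before the softmax cross-entropy is applied.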

Experiments/Demo

There is a simple set of experiments on Fashion-MNIST [5] included in train_fMNIST.py, which compares the ordinary softmax and additive margin softmax loss functions by projecting the embedding features onto a 3D sphere.

The experiments can be run like so:

python train_fMNIST.py --num-epochs 40 --seed 1234 --use-cuda

This produces the following results:

Baseline (softmax)

[figure: softmax embeddings on the 3D sphere]

Additive Margin Softmax/CosFace

[figure: CosFace embeddings on the 3D sphere]

ArcFace

[figure: ArcFace embeddings on the 3D sphere]

TODO: fix sphereface results

[1] Deng, J. et al. (2018) ‘ArcFace: Additive Angular Margin Loss for Deep Face Recognition’. Available at: http://arxiv.org/abs/1801.07698.

[2] Liu, W. et al. (2017) ‘SphereFace: Deep hypersphere embedding for face recognition’, in Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, pp. 6738–6746. doi: 10.1109/CVPR.2017.713.

[3] Wang, H. et al. (2018) ‘CosFace: Large Margin Cosine Loss for Deep Face Recognition’. Available at: http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_CosFace_Large_Margin_CVPR_2018_paper.pdf (Accessed: 12 August 2019).

[4] Wang, F. et al. (2018) ‘Additive Margin Softmax for Face Verification’, IEEE Signal Processing Letters, 25, pp. 926–930.

[5] Xiao, H. et al. (2017) ‘Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms’. Available at: http://arxiv.org/abs/1708.07747.

angular-penalty-softmax-losses-pytorch's People

Contributors

cvqluu


angular-penalty-softmax-losses-pytorch's Issues

sphereface

Hello, a question: if you use arccosine instead of the double-angle formula, do the reproduced results match those reported in the SphereFace paper?

Better loss implementation, fully GPU support

This is my cleaner, higher-performance, fully GPU-resident implementation of the loss function. It does not require torch.cat (which is painfully slow).
I excluded the FC layer and the normalization step for my project.

import math

import torch
import torch.nn as nn

class AMSLoss(nn.Module):

    def __init__(self, m=1.0):
        super(AMSLoss, self).__init__()
        self.m = m
        self.one_minus_exp_m = 1.0 - math.exp(m)

    def forward(self, logits, labels, eps=1e-10):
        """
        :param logits: B x C
        :param labels: B
        :return: scalar loss
        """
        # logits[i, labels[i]] for each sample i, without torch.cat
        numerator = torch.diagonal(logits.transpose(0, 1)[labels])
        denominator = torch.sum(torch.exp(logits + self.m), dim=1) + torch.exp(numerator) * self.one_minus_exp_m
        L = numerator - torch.log(denominator + eps)
        return -torch.mean(L)

sum instead of mean?

Would you be up for adding a reduction=... option which allows for sum vs mean?
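A reduction argument in the style of nn.CrossEntropyLoss would cover this. A minimal sketch of what the reduction step could look like (the function name `reduce_loss` is hypothetical, not part of this repo; the values 'mean'/'sum'/'none' follow the PyTorch convention):

```python
import torch

def reduce_loss(L, reduction='mean'):
    """Reduce a per-sample loss tensor L of shape (batch_size,)."""
    if reduction == 'mean':
        return L.mean()
    if reduction == 'sum':
        return L.sum()
    if reduction == 'none':
        return L  # leave per-sample losses untouched
    raise ValueError(f"unknown reduction: {reduction}")
```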

change the dataset

Hi there, thank you for your code. A question: I changed the dataset from Fashion-MNIST to FER2013. The original FER2013 comes as a CSV file, and I converted it to JPG images, so the inputs now have 3 channels (originally 1, greyscale).

Will that affect the 3D mapping pics? I got bad results after changing the dataset. @cvqluu

Question about loss_functions.py

Hi, thank you for the great implementation. I appreciate your work as well as your generosity in open-sourcing it.

As mentioned in the title, I have a question about line 35 of loss_functions.py, given below:

self.fc = nn.Linear(in_features, out_features, bias=False)

To my understanding, this would initialize a new fully connected layer on each epoch of training.
I don't understand how this layer can be optimized via backpropagation if it is re-initialized each time.

It would be a great help if anyone could explain why this inference is wrong.
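For what it's worth, `__init__` runs only once, when the module is constructed, not once per epoch. A minimal standalone check (the `Toy` module below is hypothetical, not from this repo) showing that a layer created in `__init__` does receive gradient updates:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        # Created once, when the module is instantiated -- not per epoch.
        self.fc = nn.Linear(4, 2, bias=False)

    def forward(self, x):
        return self.fc(x)

model = Toy()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

before = model.fc.weight.clone()
loss = model(torch.randn(8, 4)).pow(2).mean()
loss.backward()
opt.step()

# The weights have changed: backprop does reach self.fc.
assert not torch.equal(before, model.fc.weight)
```

As long as the training loop calls `criterion(x, labels)` on the same criterion object each iteration, the same `self.fc` parameters are being optimized throughout.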

How to generate Figure 4 ?

Hi, I wonder how to generate Figure 4, Row 2, in the paper CosFace: Large Margin Cosine Loss for Deep Face Recognition. Would you mind sharing the code? Thanks!

draw sphere fail

File "D:/project/pydemo/Angular-Penalty-Softmax-Losses-Pytorch/train_fMNIST.py", line 47, in main
    plot(am_embeds, am_labels, fig_path='./figs/{}.png'.format(loss_type))
File "D:\project\pydemo\Angular-Penalty-Softmax-Losses-Pytorch\plotting.py", line 28, in plot
    ax.set_aspect("equal")
File "D:\Program Files\Python37\lib\site-packages\matplotlib\axes\_base.py", line 1264, in set_aspect
    'It is not currently possible to manually set the aspect '
NotImplementedError: It is not currently possible to manually set the aspect on 3D axes
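This error comes from newer matplotlib versions, where `set_aspect("equal")` is rejected on 3D axes. A possible workaround (assuming matplotlib >= 3.3, where `Axes3D.set_box_aspect` exists) is to set an equal box aspect instead:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this example
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# ax.set_aspect("equal") raises NotImplementedError on 3D axes in
# matplotlib 3.1-3.5; set_box_aspect (matplotlib >= 3.3) gives the
# same equal scaling of the three axes.
ax.set_box_aspect((1, 1, 1))
```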

Normalizing the layer weights in loss function has no effect

Hello!

The parameters of the fully connected layer in the loss function are normalized in the following way:

for W in self.fc.parameters():
    W = F.normalize(W, p=2, dim=1)

However, the weights of self.fc are not affected by this operation; I checked with print(self.fc.weight) and print(W). This means the cosine calculation is actually performed with non-normalized vectors.

Normalization will not work

I tested it, and the normalization does not work:

for W in self.fc.parameters():
    W = F.normalize(W, p=2, dim=1)
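Indeed, F.normalize returns a new tensor, so rebinding the loop variable W leaves self.fc.weight untouched. Two possible ways to get actually-normalized weights (a sketch under these assumptions, not the repository's code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

fc = nn.Linear(512, 10, bias=False)

# Option 1: normalize the stored weights in place, outside autograd.
with torch.no_grad():
    fc.weight.copy_(F.normalize(fc.weight, p=2, dim=1))

# Option 2 (differentiable): normalize on the fly in forward, so the
# cosine is computed with unit-norm vectors and gradients still flow
# back into fc.weight.
x = torch.randn(4, 512)
cosine = F.linear(F.normalize(x, p=2, dim=1),
                  F.normalize(fc.weight, p=2, dim=1))
```

Option 2 is the usual pattern for angular-margin losses, since the margin is meant to act on true cosines while the underlying weights remain freely trainable.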

questions about SphereFace

I think the implementation of SphereFace is wrong, because in the original SphereFace paper:

  1. The hyperparameter 'm', which controls the angular margin, should be no less than 3 for a multi-class classification task, but I could not get a correct visualization result when setting m bigger than 2.
  2. cos(m*theta) was replaced by a piecewise function psi(theta).
  3. The feature vector x is not normalized in SphereFace, so there is no hyperparameter 's' in SphereFace.

is this code still maintained?

I would like to add some improvements to this repo, and I just want to know if anyone will review the changes, since it doesn't look like there is much going on.

EER question

Hello. I am Sunghee Jung.
I guess this repository is about image verification.
Any chance you are working on a speaker verification version of this repo?

License

Hi. Could you please add a license file?
