
Comments (4)

LMZimmer commented on May 19, 2024

Hey,
I agree there should be a nice way to add your own loss function. For now you can use the develop branch with this:

from autoPyTorch import AutoNetRegression
from autoPyTorch.pipeline.nodes.loss_module_selector import LossModuleSelector

autonet = AutoNetRegression()

loss_selector = autonet.pipeline[LossModuleSelector.get_name()]
loss_selector.add_loss_module('quantile_loss', QuantileLoss)

results_fit = autonet.fit(X_train=X_train,
                          Y_train=Y_train,
                          loss_modules=["quantile_loss"],  ...)

where QuantileLoss is your implementation of the quantile loss, which should behave like a module from torch.nn.modules.loss. Hope this helps.
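
For concreteness, here is a minimal sketch of the expected interface (assuming nothing beyond the standard torch.nn.Module contract; a full implementation follows in the next comment):

import torch
import torch.nn as nn

class QuantileLoss(nn.Module):
    # Placeholder sketch: a loss module only needs forward(input, target) returning a loss tensor.
    def __init__(self, q=0.5):
        super().__init__()
        self.q = q

    def forward(self, input, target):
        e = target - input
        # pinball / quantile loss for a single quantile q, reduced to a scalar
        return torch.max(self.q * e, (self.q - 1) * e).mean()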


maxmarketit commented on May 19, 2024
import warnings

import numpy as np
import torch
from torch.nn.modules.loss import _Loss


class QuantileLoss(_Loss):

    __constants__ = ['reduction']

    def __init__(self, size_average=None, reduce=None, reduction='mean', qs=(0.5,)):
        # size_average and reduce are deprecated in PyTorch; use reduction instead.
        # reduction = 'mean', 'sum' or 'none'
        super(QuantileLoss, self).__init__(size_average, reduce, reduction)
        if not isinstance(qs, (tuple, np.ndarray)):
            raise ValueError('qs must be a tuple or a np.ndarray')
        if isinstance(qs, tuple):
            qs = np.array(qs)
        if qs.ndim != 1:
            raise ValueError('qs must be one-dimensional')

        self.qs = qs

    def forward(self, input, target):
        # Adapted from torch.nn.functional.l1_loss:
        # l1_loss(input, target, size_average=None, reduce=None, reduction='mean')
        # The torch.jit.is_scripting() handling and the fast path taken when
        # target does not require grad were removed from the original.
        reduction = self.reduction
        if target.size() != input.size():
            warnings.warn("Using a target size ({}) that is different to the input size ({}). "
                          "This will likely lead to incorrect results due to broadcasting. "
                          "Please ensure they have the same size.".format(target.size(), input.size()),
                          stacklevel=2)

        qs = torch.tensor(self.qs, dtype=input.dtype, requires_grad=False)
        e = target - input
        # If the last dimension of e does not match len(self.qs), broadcasting raises an error.
        ret = torch.max(qs * e, (qs - 1) * e)
        # For comparison, the L1/MAE version would be: ret = torch.abs(input - target)
        if reduction == 'mean':
            ret = torch.mean(ret)
        elif reduction == 'sum':
            ret = torch.sum(ret)
        else:
            # The l1_loss branch for reduction='none' was removed; not implemented here.
            raise NotImplementedError("reduction='none' is not implemented")
        return ret

I implemented the quantile loss for a general set of q's.
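
For reference, forward() relies on the pinball-loss identity max(q*e, (q-1)*e) with e = target - input, which equals q*e for e >= 0 and (q-1)*e for e < 0. A quick illustrative check (values chosen arbitrarily):

import torch

q = 0.9
e = torch.tensor([-2.0, 0.5, 3.0])
pinball = torch.max(q * e, (q - 1) * e)              # as used in forward()
piecewise = torch.where(e >= 0, q * e, (q - 1) * e)  # explicit piecewise form
print(torch.allclose(pinball, piecewise))            # True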

I adapted l1_loss and deleted the parts that I do not understand (for example, if not torch.jit.is_scripting():, if target.requires_grad:, and if reduction == 'none':).

It works fine, at least for the toy example I experimented with.

Do you know anything about the parts I ignored above?

Is it possible to open a pull request for Auto-PyTorch or PyTorch (I might be asking in the wrong place)?


maxmarketit commented on May 19, 2024

The use case:

#ypred = torch.tensor(np.array([[4, 3, 4],[3,2,3],[1,1,2]]), dtype=torch.float64, requires_grad = True)
ypred = torch.tensor(np.array([[0.1, -0.3, 0.2]]), dtype=torch.float64, requires_grad=True)
ytrue = torch.tensor(np.array([[1,1,1],[2,2,2],[3,3,3],[4,4,4], [5,5,5], [6,6,6], [7,7,7], [8,8,8], [9,9,9], [10,10,10]]), dtype=torch.float64, requires_grad=False)

optimizer = torch.optim.Adam([ypred], lr=0.01)
cLoss = QuantileLoss(qs=(0.1, 0.5, 0.9))
for i in range(2000):
    loss = cLoss(ypred, ytrue)
    if i % 400 == 0:
        print(i, 'loss={:2.2}\n'.format(loss.item()), 
              'value=', ypred)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()


maxmarketit commented on May 19, 2024

I have tried what you mentioned; the result is that the develop branch does not seem to be stable.

When I tried the tutorial, an error occurred.

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-6-66db46cdcea7> in <module>
      1 # Get the ConfigSpace object with all hyperparameters, conditions, default values and default ranges
----> 2 hyperparameter_search_space = autonet.get_hyperparameter_search_space()
      3 
      4 # Print all possible configuration options
      5 #autonet.print_help()

~/anaconda3/envs/autopytorch/lib/python3.6/site-packages/autoPyTorch-0.0.2-py3.6.egg/autoPyTorch/core/api.py in get_hyperparameter_search_space(self, X_train, Y_train, X_valid, Y_valid, **autonet_config)
    101                                                  Y_valid=Y_valid)["dataset_info"]
    102 
--> 103         return self.pipeline.get_hyperparameter_search_space(dataset_info=dataset_info, **pipeline_config)
    104 
    105     @classmethod

~/anaconda3/envs/autopytorch/lib/python3.6/site-packages/autoPyTorch-0.0.2-py3.6.egg/autoPyTorch/pipeline/base/pipeline.py in get_hyperparameter_search_space(self, dataset_info, **pipeline_config)
    109         for name, node in self._pipeline_nodes.items():
    110             #print("dataset_info" in pipeline_config.keys())
--> 111             config_space = node.get_hyperparameter_search_space(**pipeline_config)
    112             cs.add_configuration_space(prefix=name, configuration_space=config_space, delimiter=ConfigWrapper.delimiter)
    113 

~/anaconda3/envs/autopytorch/lib/python3.6/site-packages/autoPyTorch-0.0.2-py3.6.egg/autoPyTorch/pipeline/nodes/imputation.py in get_hyperparameter_search_space(self, dataset_info, **pipeline_config)
     55 
     56         cs = ConfigSpace.ConfigurationSpace()
---> 57         cs.add_hyperparameter(CSH.CategoricalHyperparameter("strategy", possible_strategies))
     58         self._check_search_space_updates()
     59         return cs

ConfigSpace/hyperparameters.pyx in ConfigSpace.hyperparameters.CategoricalHyperparameter.__init__()

TypeError: Using a set of choices is prohibited as it can result in non-deterministic behavior. Please use a list or a tuple.

Any suggestions?

The error occurs while executing hyperparameter_search_space = autonet.get_hyperparameter_search_space() in the Auto-PyTorch Tutorial.ipynb
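
For anyone hitting the same TypeError: the ConfigSpace message and the traceback suggest that possible_strategies is being passed as a set at that point. A minimal local workaround sketch (an assumption, not an official fix) is to convert it to a deterministically ordered list in autoPyTorch/pipeline/nodes/imputation.py before building the hyperparameter:

# autoPyTorch/pipeline/nodes/imputation.py, around line 57 (workaround sketch)
cs = ConfigSpace.ConfigurationSpace()
# sorted() turns the set into a list with a stable order, which CategoricalHyperparameter accepts
cs.add_hyperparameter(CSH.CategoricalHyperparameter("strategy", sorted(possible_strategies)))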

