Comments (6)
I'm not sure what we should add to the documentation.
I'm not an expert on this, but in multi-objective optimization you in principle want to obtain a whole Pareto frontier.
I wouldn't mention the term "multi-objective" anywhere in the documentation.
We could stress in more places that `withgradient` can return additional outputs, if we think it deserves more prominence.
from flux.jl.
There currently isn't any mention in the Flux docs of how to get multiple outputs from `withgradient`. We don't have to stress multi-objective optimization, but I think there definitely needs to be documentation on how to track multiple losses using `withgradient`. Currently you have to look at Zygote, which I couldn't find until @mcabbott pointed it out to me.
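For anyone landing here before the docs are updated, the Zygote behaviour under discussion can be sketched in a few lines (this is a minimal sketch, assuming only that Zygote is installed):

```julia
using Zygote

# If the function passed to withgradient returns a Tuple or NamedTuple,
# only the FIRST element is differentiated; the remaining fields are
# returned as auxiliary values, with no gradient taken through them.
result = Zygote.withgradient(2.0) do x
    (; loss = x^2, aux = x + 1)
end

result.val.loss  # 4.0, the differentiated value
result.val.aux   # 3.0, auxiliary output
result.grad      # (4.0,), d(x^2)/dx at x = 2
```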
I guess where I'm confused is why this feature request is about multi-objective losses. Mostly because this:

```julia
loss, grads = withgradient(model) do m
    a = loss_a(m, x)
    b = loss_b(m, x)
    c = a + b
    return c
end
```
Is still training with a "multi-objective loss" from my perspective. What the recent `withgradient` change unlocked is the ability to return auxiliary state which will not have gradients backpropped through it. It just so happens that the example in the issue uses this mechanism to sneak out the individual losses for subsequent code, but it would be equally valid to use it for e.g. embeddings or non-differentiable metrics. Therefore, is this request more about documenting how to use auxiliary state with `withgradient` and Flux models, with examples, or is it about showing how to optimize a model with multiple joint losses (and how similar that would be to what you'd do with other libraries: just return `loss1 + loss2 + ...`)?
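To make the auxiliary-state point concrete, here is one hypothetical sketch where the same backward pass also smuggles out a non-differentiable accuracy metric and the raw logits (the model, `x`, and `y` below are toy placeholders, not anything from this issue):

```julia
using Flux  # Flux re-exports withgradient from Zygote

# Toy setup; stand-ins for a real data pipeline.
model = Dense(4 => 2)
x = rand(Float32, 4, 8)
y = Flux.onehotbatch(rand(1:2, 8), 1:2)

out, grads = Flux.withgradient(model) do m
    ŷ = m(x)
    loss = Flux.logitcrossentropy(ŷ, y)
    # Everything after the first field is auxiliary state: no gradient
    # is taken through the accuracy or the raw logits.
    acc = sum(Flux.onecold(ŷ) .== Flux.onecold(y)) / size(y, 2)
    (; loss, acc, logits = ŷ)
end

out.loss   # the value that was differentiated
out.acc    # non-differentiable metric from the same forward pass
grads[1]   # gradient NamedTuple for the model's parameters
```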
#2331 suggests adding this bullet: https://github.com/FluxML/Flux.jl/pull/2331/files#diff-791e8b024a9ce7e7f89b45b7582d628d3d8d55f0bb5e17c39f8a50bd6aa21aeaR228-R230
@ToucheSir, the functionality you included in your previous post is well documented. That code calculates the gradients and the total loss from the combined individual loss terms. I am interested in using `withgradient` to return the value of each individual loss term (in your code, I would like to track the individual losses a and b every epoch). It doesn't affect how the network trains, but it gives you some insight into how the network is performing on each individual loss term. I'm studying Physics-Informed Neural Networks (PINNs), where it is important to see how the network is handling the initial, boundary, and physics loss terms individually. Without the functionality below,
```julia
trio, grads = withgradient(model) do m
    a = loss_a(m, x)
    b = loss_b(m, x)
    (; c = a + b, a, b)
end
```
you would have to call `withgradient` on loss components a and b individually and manually add the gradients back together before passing them to the optimizer. I would like to see this syntax added to the Flux documentation, as it is currently only documented in the Zygote package.
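For context, here is one hypothetical way that pattern plugs into a Flux training step, with `loss_a` and `loss_b` as stand-ins for e.g. the PINN component losses (model, data, and losses below are all placeholders):

```julia
using Flux

# Stand-in model and loss components; in a PINN these would be the
# initial, boundary, and physics residual terms.
model = Dense(3 => 1)
x = rand(Float32, 3, 16)
loss_a(m, x) = sum(abs2, m(x))     # placeholder component
loss_b(m, x) = sum(abs, m.weight)  # placeholder component

opt_state = Flux.setup(Adam(1e-3), model)

trio, grads = Flux.withgradient(model) do m
    a = loss_a(m, x)
    b = loss_b(m, x)
    (; c = a + b, a, b)  # c is differentiated; a and b ride along
end

# One optimizer step on the combined loss, with both terms logged:
Flux.update!(opt_state, model, grads[1])
@info "losses" total = trio.c a = trio.a b = trio.b
```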
@mcabbott, yes, even something small like that would really help! Just something to make it explicit that this functionality exists.
Ok, so in that case I agree with Carlo that the documentation should not mention multi-objective losses specifically, but rather focus on getting auxiliary information out and perhaps provide individual loss terms as an example.