A working implementation is available as of 4f735f4.

In `feedbax.loss`, [`TargetStateLoss`](https://github.com/mlprt/feedbax/blob/4f735f434ededd122ddeee6957fd911a9e8870c7/feedbax/loss.py#L357) is a subclass of `AbstractLoss` that associates a `where` function, a norm function, and a `TargetSpec`.
A `TargetSpec` provides 1) the target value of the state, 2) the time indices at which the state's value should be penalized (e.g. penalize effector velocity on the final time step only), and 3) an array of discounting factors. All of these fields are optional, and partial specifications may be combined.
In `TaskTrialSpec`, there is now a field `targets: WhereDict[TargetSpec]` through which a subclass of `AbstractTask` can provide trial-by-trial `TargetSpec` information to instances of `TargetStateLoss` (via `TaskTrainer`).
When a `TargetStateLoss` instance is called, its `spec: Optional[TargetSpec]` field is `eqx.combine`'d with any entries in `trial_specs.targets`. This allows the user to supply default target values on instantiation of `TargetStateLoss`, but also allows the task to be designed (example) so that it provides trial-by-trial targets. A target value must be specified either trial-by-trial or as a default -- an error is raised if no target value is available.
A `loss_func` must still be passed when instantiating an `AbstractTask` subclass. Composing the terms of the loss function is now a little more complicated than it used to be, since we add `TargetStateLoss` instances by specifying the part of the state to penalize. We could probably replace the old loss classes like `EffectorPositionLoss` with factories/wrappers for `TargetStateLoss`, which would simplify loss function construction again in some typical use cases.
One issue that remains is the possibility of multiple targets being specified with respect to the same part of the state. For example, in delayed reaching we might want separate loss terms for the effector position error with respect to 1) the reach goal, and 2) the initial fixation. The possibility of multiple loss terms on a single target is why I enabled `tuple[Callable, str]` keys for `WhereDict`, so that a `where` lambda can be combined with a unique label. However, this means we have to make sure that the `label` field of `TargetStateLoss` matches the string entry in the `TargetSpec`s constructed by the task. There are a couple of other options here:
- Allow the values of `trial_specs.targets` to be a `Mapping[str, TargetSpec]`. This doesn't solve the string-matching issue, but it does simplify the allowable keys of `WhereDict`.
- Only allow a single target for each part of the state. This should be possible (e.g. goal and fixation targets happen at different times during the reach), but it would mean that some other mechanism (say, in `AbstractLoss`) would be necessary if we want users to be able to distinguish loss contributions from different epochs of a trial.
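To make the tuple-key idea concrete, here is a toy sketch using a plain dict in place of `WhereDict` (the state structure and target values are made up for illustration):

```python
# Two targets on the same part of the state, disambiguated by a label in a
# (where, label) tuple key. A plain dict stands in for WhereDict here.
where_effector_pos = lambda state: state["effector"]["pos"]

targets = {
    (where_effector_pos, "goal"): {"value": (1.0, 1.0), "time_idxs": (-1,)},
    (where_effector_pos, "fixation"): {"value": (0.0, 0.0), "time_idxs": (0, 1, 2)},
}

# A loss term labeled "goal" must use the same label string to find its spec:
goal_spec = targets[(where_effector_pos, "goal")]
```

The string-matching fragility discussed above is visible here: a loss term labeled `"Goal"` instead of `"goal"` would silently fail to find its spec.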