Comments (5)
Yeah, I think that makes sense: a format that enables easy transformation to a df, e.g. with `df = pd.DataFrame(output_dic)`, but without the new dependency.
from inseq.
Extracting scores and converting them to pandas format will be made easier by `get_scores_dicts`, introduced in #157.
Hi @MoritzLaurer, thank you for your interest! This part is still quite undocumented, but we hope to add more details in the docs soon!
At the end of the Getting started section in the docs we show an example of the attribution output, which I report here:

```python
>>> print(out)
FeatureAttributionOutput({
    sequence_attributions: list with 1 elements of type GradientFeatureAttributionSequenceOutput: [
        GradientFeatureAttributionSequenceOutput({
            source: list with 13 elements of type TokenWithId: [
                '▁Hello', '▁world', ',', '▁here', '\'', 's', '▁the', '▁In', 'se', 'q', '▁library', '!', '</s>'
            ],
            target: list with 12 elements of type TokenWithId: [
                '▁Bonjour', '▁le', '▁monde', ',', '▁voici', '▁la', '▁bibliothèque', '▁Ins', 'e', 'q', '!', '</s>'
            ],
            source_attributions: torch.float32 tensor of shape [13, 12, 512] on CPU,
            ...
        })
    ],
    step_attributions: None,
    info: {
        ...
    }
})
```
As you can see, the source sequence contains 13 tokens and the target contains 12 tokens, while the attribution computed with a gradient-based method is a 3D tensor of shape `[src_len, tgt_len, hidden_size]`.
When you call `out.show()` to visualize the attribution output, the `out.aggregate()` method is called before visualizing the scores, which in turn makes use of the default `Aggregator` associated with the output class (the `out._aggregator` property). For gradient methods, the default aggregator is a `SequenceAttributionAggregator` that squeezes the last `hidden_size` dimension to return the 2D tensor that is finally passed on for visualization.
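A minimal illustration of that squeezing step, assuming the `hidden_size` dimension is collapsed with a vector norm (the exact reduction applied by the default aggregator may differ; the tensor here is just dummy data matching the `[13, 12, 512]` shape shown above):

```python
import torch

# Dummy gradient attribution tensor of shape [src_len, tgt_len, hidden_size],
# mirroring the [13, 12, 512] source_attributions tensor above
attr = torch.rand(13, 12, 512)

# Collapse the last dimension, e.g. with an L2 norm, to obtain the
# 2D [src_len, tgt_len] matrix used for visualization
aggregated = attr.norm(p=2, dim=-1)
print(aggregated.shape)  # torch.Size([13, 12])
```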
To obtain the same output and pair it with the tokens, assuming a gradient method that returns a 3D tensor, you could do something like:

```python
import inseq

model = inseq.load_model("Helsinki-NLP/opus-mt-en-fr", "saliency")
# Produces a FeatureAttributionOutput containing 1 GradientFeatureAttributionSequenceOutput
out = model.attribute(<YOUR_INPUT>)
# After this step the source and, if present, target attributions have shapes
# [src_len, tgt_len] and [tgt_len, tgt_len] respectively
aggregated_attribution = out.sequence_attributions[0].aggregate()
# Create a mapping of (src_token, tgt_token) -> attribution score
score_map = {}
for src_idx, src_tok in enumerate(aggregated_attribution.source):
    for tgt_idx, tgt_tok in enumerate(aggregated_attribution.target):
        score_map[(src_tok.token, tgt_tok.token)] = aggregated_attribution.source_attributions[src_idx, tgt_idx].item()
print(score_map)
```

```
{('▁Hello', '▁Bonjour'): 0.8095492720603943,
 ('▁Hello', '▁le'): 0.5914772152900696,
 ('▁Hello', '▁monde'): 0.655048131942749,
 ('▁Hello', ','): 0.6247086524963379,
 ('▁Hello', '▁voici'): 0.7142019271850586,
 ('▁Hello', '▁la'): 0.623748779296875,
 ('▁Hello', '▁bibliothèque'): 0.3409218192100525,
 ('▁Hello', '▁Ins'): 0.28728920221328735,
 ('▁Hello', 'e'): 0.18802204728126526,
 ('▁Hello', 'q'): 0.13516321778297424,
 ('▁Hello', '!'): 0.792391300201416,
 ('▁Hello', '</s>'): 0.7535314559936523,
 ('▁world', '▁Bonjour'): 0.39373481273651123,
 ('▁world', '▁le'): 0.3593481779098511,
 ...
```
Hope it helps! I'd be curious to hear any ideas you might have on what a better API for accessing such scores could look like!
Great, that works, thanks! (Intuitively I would probably enable people to return this as a pandas dataframe for downstream analysis, but that would probably add another dependency.)
I am not sure we want `pandas` as a dependency, since that would be its only use case. Would a list of dicts in record format also work in your opinion? Every dict would have `src_token_x` as key and, as value, a dict of `tgt_token_x`: `src_x_to_tgt_x_saliency` scores. The user could then feed this to `pd.DataFrame()` to produce a dataframe matching the format of the original attribution tensor.
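One possible reading of that record format, sketched as a hypothetical illustration (this is not an existing inseq API; the token names and scores are taken from the example output above, truncated to a few entries):

```python
# The (src_token, tgt_token) -> score mapping built in the example above
score_map = {
    ("▁Hello", "▁Bonjour"): 0.8095492720603943,
    ("▁Hello", "▁le"): 0.5914772152900696,
    ("▁world", "▁Bonjour"): 0.39373481273651123,
    ("▁world", "▁le"): 0.3593481779098511,
}

# Re-nest into records: one entry per source token, each mapping
# target tokens to their saliency scores
records = {}
for (src_tok, tgt_tok), score in score_map.items():
    records.setdefault(src_tok, {})[tgt_tok] = score

# `records` could then be fed to pd.DataFrame(records) to obtain a dataframe
# with source tokens as columns and target tokens as rows
print(records["▁Hello"]["▁Bonjour"])  # 0.8095492720603943
```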