Comments (4)
Does (C, W) mean global position encoding?
Note that our positional encoding is relative and shared across the other axis. For example, in w-axis attention, each pixel corresponds to W other pixels and W relative positions, so there are (W, W) relative positional encodings in total.
from axial-deeplab.
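To illustrate the reply above, here is a minimal sketch (not the repository's actual code) of how a relative index along one axis can be built: each of the W query positions sees W key positions, giving a (W, W) index table, while the distinct relative distances number only 2W - 1. The value of W below is illustrative.

```python
import torch

W = 5  # axis length (e.g. image width); illustrative value

# Distinct relative distances along one axis range over [-(W-1), W-1],
# so there are 2*W - 1 learnable encodings, shared across the other axis.
num_rel = 2 * W - 1

# relative_index[i, j] picks the encoding for query position i attending
# to key position j; all pairs with the same (i - j) share one entry.
q_idx = torch.arange(W).unsqueeze(1)        # (W, 1)
k_idx = torch.arange(W).unsqueeze(0)        # (1, W)
relative_index = (q_idx - k_idx) + (W - 1)  # (W, W), values in [0, 2W-2]
```

The (W, W) table mentioned in the reply is exactly this index expanded over the learnable 2W - 1 encodings.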
Thanks for your helpful reply. I did mean global position encoding by (C, W).
I am also confused about the following line:
axial-deeplab/lib/models/axialnet.py, line 44 (commit fe1d052)
My confusion is this: since all the position encodings are initialized randomly, I expected that any order used to index the relative encodings would give similar results, so perhaps we could index them in a simpler way. But clearly you think otherwise, since you use `relative_index`. What am I missing?
They are randomly initialized, but different positions get different relative positional encodings, while pairs at the same relative distance should share weights.
Note that our positional encoding is relative and shared across the other axis. For example, in w-axis attention, each pixel corresponds to W other pixels and W relative positions, so there are (W, W) relative positional encodings in total.
If the span size K is smaller than the width W, does the relative position encoding matrix then have size (C, W, K)? So that it is einsum-ed with the query like Q (H, (W, C)) * r^q ((W, C), K) -> A (H, W, K)? (A: attention matrix)
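The contraction the question proposes can be sketched as follows. This is a hypothetical layout (shapes H, W, C, K are illustrative, and r_q as a per-position (C, K) slab is the asker's assumption, not necessarily the repository's), shown only to make the einsum concrete:

```python
import torch

H, W, C, K = 4, 6, 8, 6  # illustrative sizes; K = W gives the global span

q = torch.randn(H, W, C)    # queries along the w-axis
r_q = torch.randn(W, C, K)  # one (C, K) encoding slab per query position

# For each row h and query position w, contract over channels:
# A[h, w, k] = sum_c q[h, w, c] * r_q[w, c, k]
A = torch.einsum('hwc,wck->hwk', q, r_q)
assert A.shape == (H, W, K)
```

Under this layout the attention logits A do come out as (H, W, K), matching the shape proposed in the question.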
Related Issues (20)
- About the class activation map
- position-sensitive attention
- Seems dist_train.py didn't wrap the model with the synchronized batch norm
- Confused about the `transpose` in positional encoding of key
- Pretrained weights
- how does axial-attention support multi-scale training/testing?
- Question about table 9 in paper
- What's HERE??
- Training with non-square images
- It seems that the code of qkv_transform is missing.
- Question about Axial-Res50
- Question about kernel_size in AxialAttention
- Shape of relative position encoding r^q, r^k, r^v
- About local constraints
- Pretrain_weights
- about function parameter "s=0.5" in code
- why batch normalization after qkv transform?
- Different resolution for inference
- Can it be used in video tasks?