Comments (11)
Hi, thanks for your attention.
- We visualize the attention map by computing the attention weights of the PCT model (including the neighbor embedding) between different points.
- The norm dimension is correct. The first softmax does not act as a normalization; it eliminates the impact of scale. The second normalization (the L1 norm) performs the actual normalization. For a detailed explanation, please read the paper.
- 12G of memory does not seem to be enough for part segmentation. We ran our experiments on a 3090 or RTX card.
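To make the two normalization steps concrete, here is a minimal sketch (not the official code; shapes and names are illustrative) of the scheme described above: the softmax over the last dimension removes the effect of scale, and the subsequent L1 normalization over dim=1 makes each column of the attention matrix sum to one, which is what matters since the refined feature is computed as Value × Attention:

```python
import torch

# Minimal sketch of PCT-style double normalization (illustrative, not the official code).
def normalize_attention(energy):
    # energy: (B, N, N) raw query-key products
    attention = torch.softmax(energy, dim=-1)                # removes scale; rows sum to 1
    attention = attention / (1e-9 + attention.sum(dim=1, keepdim=True))  # L1 norm over columns
    return attention

energy = torch.randn(2, 5, 5)
attn = normalize_attention(energy)
# The output feature is value @ attention, so each output point mixes the
# value vectors with the weights in one column; columns now sum to ~1.
print(torch.allclose(attn.sum(dim=1), torch.ones(2, 5), atol=1e-5))  # True
```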
from pct.
Thanks for your reply!
But you have 4 attention layers; which attention layer are you visualizing?
Did you visualize the other attention layers?
In my experiments, the 1st and 3rd attention layers only attend to neighboring points (the 1st is much wider than the 3rd), while the other attention layers focus on irrelevant areas. I don't know what the 2nd and 4th attention blocks are doing. (I visualized the SPCT from your paper.)
- We visualize the average of the 4 attention layers.
- Yes, I also tried visualizing all the attention layers of PCT, and it produced similar results. However, we did not try to visualize SPCT.
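A minimal sketch of that averaging (my assumption about the procedure; the function and variable names are illustrative, not from the official code):

```python
import torch

# Illustrative sketch: element-wise average of the four N x N attention maps
# that would come from the model's four SA layers (names are hypothetical).
def average_attention(maps):
    # maps: list of (N, N) tensors, one per attention layer
    return torch.stack(maps).mean(dim=0)

maps = [torch.rand(8, 8) for _ in range(4)]
avg = average_attention(maps)
print(avg.shape)  # torch.Size([8, 8])
```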
How do you visualize the average of the 4 attention layers? Do you element-wise add all four N×N attention maps, divide by 4, and visualize the result?
OK, it seems that PCT is much stronger than SPCT, haha.
Thanks for your reply!
Hi! Can you share your visualization.py? I have tried many times but failed. Many thanks!
```python
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the '3d' projection)

def Visualize_Attention_Map(u, xyz, attention, axis):
    coordi = xyz.cpu().permute(1, 0)        # (3, N) -> (N, 3)
    fig = plt.figure(dpi=100, frameon=False)
    ax = fig.add_subplot(projection='3d')   # fig.gca(projection=...) is removed in newer matplotlib
    if not axis:
        plt.axis('off')
    the_fourth_dimension = attention.cpu().numpy()
    # min-max normalize the attention weights to [0, 1] for the colormap
    the_fourth_dimension = (the_fourth_dimension - the_fourth_dimension.min()) / \
                           (the_fourth_dimension.max() - the_fourth_dimension.min())
    colors = cm.cividis(the_fourth_dimension)
    ax.scatter(coordi[:, 0], coordi[:, 1], coordi[:, 2], c=colors, marker='o', s=10)
    ax.scatter(coordi[u, 0], coordi[u, 1], coordi[u, 2], c='r', s=100)  # highlight the query point
    colmap = cm.ScalarMappable(cmap=cm.cividis)
    colmap.set_array(the_fourth_dimension)
    fig.colorbar(colmap, ax=ax)
    plt.show()
```
input:
u: (0 ~ N-1) index of the query point you want to inspect
xyz: (3, N) the corresponding coordinates of the points
attention: (N,) the attention map for that point
axis: (True or False) whether to draw the axes
I use matplotlib to plot the map, and here is the code. Tell me if you have any problems.
Thanks!! But I have some problems. The attention tensor has shape (Batch, 256, 256) (the same for sa1, sa2, sa3, and sa4), but according to your instructions the attention argument should have shape (N,). How should I change it?
You can just call this function from your model. Here is my partial code:
```python
#################################### partial code for the model ####################################
x, attention1 = self.sa1(x)
x, attention2 = self.sa2(x)
x, attention3 = self.sa3(x)
x, attention4 = self.sa4(x)
#################################### full code for visualization ####################################
coordi = xyz[8].cpu()               # coordinates of the 8th sample in the batch
for i in range(2048):
    atten = attention1[8, i, :]     # row i of the first attention map
    Visualize_Attention_Map(i, coordi, atten, False)
```
I simply return the attention maps and use a for loop to visualize the map for each point. See my inputs to Visualize_Attention_Map and you will know why the size of attention is N.
I used this code to visualize the attention map, but the results are not the same as in the paper. Were you able to reproduce the results? I look forward to your reply!
Could you please release your PyTorch reproduction of the part segmentation code? My own reproduction performs very poorly. If possible, I would like to look at your reproduced code. Many thanks.
Hi, first of all I want to thank you for the proposed method, which has benefited me a lot. I reproduced your code in PyTorch and tried to visualize the attention map on the part segmentation task, but when I use a point on the right wing as the query point, it does not attend to the left wing the way your paper's visualization shows. So I would like to know how you produced the visualization in the paper.
In addition, another issue points out that the dimension of the softmax looks wrong: since the multiplication is Value * Attention, I think the softmax dimension in the attention should be 1, not -1 (or 2); please correct me if I am mistaken. Also, the dimensions of the softmax and the L1 norm differ (the softmax uses -1 but the L1 norm uses 1). Why?
Line 211: self.softmax = nn.Softmax(dim=-1)
Line 220: attention = attention / (1e-9 + attention.sum(dim=1, keepdims=True))
I also want to know how you do neighbor embedding in part segmentation. The paper says the number of output points is N, which means you do not downsample the points even though the SG (sampling and grouping) module runs twice. But when I reproduce that setting, I get CUDA out of memory on an RTX 2080Ti (VRAM: 12G). Is my VRAM not big enough, or have I misunderstood the paper's description?
I'm looking forward to your reply, and thank you for your contribution.