thmoa / octopus
This repository contains code corresponding to the paper Learning to Reconstruct People in Clothing from a Single RGB Camera.
When the program loads the downloaded weight file, it reports the following error:
“ValueError: Layer #5 (named "conv2d_1"), weight <tf.Variable 'conv2d_1/kernel:0' shape=(3, 3, 8, 16) dtype=float32> has shape (3, 3, 8, 16), but the saved weight has shape (8, 3, 3, 3)”
What is the cause?
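A likely cause (an assumption based on the reported shapes, not confirmed by the authors): the weights are being paired with the wrong layers, which happens when `load_weights` matches weights by topological order across mismatched Keras versions, and/or the file stores Theano-style kernels. Theano orders convolution kernels as (out_channels, in_channels, rows, cols), while TensorFlow uses (rows, cols, in_channels, out_channels); a saved (8, 3, 3, 3) kernel is consistent with a Theano-layout layer of 8 filters over 3 input channels. A minimal sketch of the axis conversion:

```python
import numpy as np

# Theano-style convolution kernel: (out_channels, in_channels, rows, cols)
theano_kernel = np.zeros((8, 3, 3, 3), dtype=np.float32)

# TensorFlow-style layout: (rows, cols, in_channels, out_channels)
tf_kernel = np.transpose(theano_kernel, (2, 3, 1, 0))

print(tf_kernel.shape)  # (3, 3, 3, 8)
```

Note that even after conversion a (3, 3, 3, 8) kernel cannot fill a (3, 3, 8, 16) slot, which suggests the weights are landing on the wrong layer entirely; trying `model.load_weights(path, by_name=True)` or matching the Keras version the weight file was written with are the usual remedies.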
Are the camera parameters fixed? If they are fixed, what are these parameters?
Asking these purely out of curiosity, to fine-tune my understanding:
a) will using the SMPL gender-specific models be better than using the "neutral model" as suggested?
b) will using the new SMPL-X (expressive) model in any way improve accuracy further over the existing approach?
c) since Octopus generates the final OBJ using data from several poses, compared to SMPL-X which uses a single picture, is the Octopus approach more accurate in terms of body shape and joint error?
I use tensorflow-gpu 1.14.0, keras 2.2.4, opencv-python 4.4.0.46. But it reports:
Using TensorFlow backend.
Traceback (most recent call last):
File "infer_single.py", line 9, in <module>
from model.octopus import Octopus
File "/home/fyytim/octopus/octopus/model/octopus.py", line 206
self.laplacian = Lambda(lambda (v0, v1): compute_laplacian_diff(v0, v1, self.faces), name='laplacian')(
^
SyntaxError: invalid syntax
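The failing line uses Python 2's tuple-parameter unpacking in a lambda (`lambda (v0, v1): ...`), which was removed in Python 3 (PEP 3113), so this file only parses under Python 2.7. Under Python 3 the lambda would take a single sequence argument and index it; a minimal sketch (`compute_laplacian_diff` here is a hypothetical stand-in for the repo's function):

```python
# Python 2 allowed tuple parameters in lambdas: `lambda (v0, v1): ...`
# Python 3 removed them (PEP 3113): take one argument and index it.

# Hypothetical stand-in for the repo's compute_laplacian_diff:
def compute_laplacian_diff(v0, v1, faces):
    return [a - b for a, b in zip(v0, v1)]

faces = None  # placeholder for self.faces

# Python 3 compatible rewrite of the failing Lambda body:
laplacian_fn = lambda v: compute_laplacian_diff(v[0], v[1], faces)

print(laplacian_fn(([4, 5], [1, 2])))  # [3, 3]
```

Applied to octopus.py line 206, that would read `Lambda(lambda v: compute_laplacian_diff(v[0], v[1], self.faces), name='laplacian')(...)`, though other Python 2 constructs elsewhere in the repo would also need porting.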
Hi,
I have created a 3D model with octopus, but when I import it into Maya to apply the texture I cannot find the UVs that would map the texture to the model.
Thanks for your response.
Hi, how do you get the files J_regressor.pkl and face_regressor.pkl?
Thank you for sharing this great work.
I recently ran into the following issue:
I ran infer_batch.py on 3 sets of images, and it executed successfully, producing three meshes. To debug the problem out of the box, I compared the three meshes with the output of infer_single.py (using the same default arguments, i.e. the same optimisation steps). I found that only the first mesh produced by infer_batch.py is identical to the output of infer_single.py.
Does this mean that the model's weights are updated after the first execution? Otherwise, what could be the reason for such behaviour?
Thanks
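One plausible explanation (an assumption, not confirmed by the repo): `opt_pose`/`opt_shape` call Keras `fit()`, which mutates the model's weights in place, so each subsequent subject in a batch starts from the previous subject's optimised state. A toy sketch of the effect and of the fix (resetting the weights between subjects; `ToyModel` is a stand-in, not the real Octopus API):

```python
class ToyModel:
    """Toy stand-in for the Octopus Keras model (not the real API)."""
    def __init__(self, pretrained_w):
        self.pretrained_w = pretrained_w
        self.w = pretrained_w

    def load_weights(self):
        self.w = self.pretrained_w   # reset to the pretrained state

    def fit(self):
        self.w += 1                  # optimisation mutates weights in place

m = ToyModel(0)
results = []
for subject in range(3):
    m.load_weights()                 # without this, runs are not independent
    m.fit()
    results.append(m.w)
print(results)  # [1, 1, 1]
```

Without the `load_weights()` call inside the loop, `results` would be `[1, 2, 3]`, i.e. every subject after the first would start from drifted weights, matching the reported behaviour.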
Have you had a chance to release the LifeScans dataset?
Hello! I am interested in the LifeScans dataset mentioned in your paper. When will you be able to provide the training code and dataset?
How can I get the 1826 scans provided by Twindom?
Hi,
Awesome code!
Running the bash script gives "ValueError: Incompatible shapes between op input and calculated input gradient."
I tried running both scripts and got the same error!
Environment:
OS - Ubuntu 16.04
Tensorflow_gpu: 1.12.2
Keras: 2.2.4
The "Optimizing for pose" step runs fine, but "Optimizing for shape" gives an error.
Any help would be great!
Thanks.
Hello,
Why is the scale factor computed as "1.66 / body_height"? What is the reason for 1.66?
I did not find any mention of it in the paper.
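1.66 m appears to serve as a canonical average body height: dividing by the estimated body height rescales every subject to the same nominal stature before optimisation. That interpretation is an assumption (the constant is not explained in the paper); numerically the factor simply behaves as:

```python
AVG_HEIGHT_M = 1.66  # assumed canonical body height in metres

def scale_factor(body_height_m):
    """Scale that maps a subject of the given height to 1.66 m."""
    return AVG_HEIGHT_M / body_height_m

print(scale_factor(1.66))  # 1.0  (a subject of exactly 1.66 m is unscaled)
```

So a taller subject gets a factor below 1 and a shorter one a factor above 1, normalising the mesh to the template's scale.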
I used the environment Python 3.7, tensorflow 2.4.0, keras 2.4.3.
When I had resolved all the incompatibilities, this came up:
ValueError: Missing data for input "posetrans_init". You passed a data dictionary with keys ['image_0', 'J_2d_0', 'image_1', 'J_2d_1', 'image_2', 'J_2d_2', 'image_3', 'J_2d_3', 'image_4', 'J_2d_4', 'image_5', 'J_2d_5', 'image_6', 'J_2d_6', 'image_7', 'J_2d_7']. Expected the following keys: ['image_0', 'image_1', 'image_2', 'image_3', 'image_4', 'image_5', 'image_6', 'image_7', 'J_2d_0', 'J_2d_1', 'J_2d_2', 'J_2d_3', 'J_2d_4', 'J_2d_5', 'J_2d_6', 'J_2d_7', 'posetrans_init']
Here is the code:
```python
def opt_pose(self, segmentations, joints_2d, opt_steps):
    data = {}
    supervision = {}

    for i in range(self.num):
        data['image_{}'.format(i)] = np.tile(
            np.float32(segmentations[i].reshape((1, self.img_size, self.img_size, -1))),
            (opt_steps, 1, 1, 1)
        )
        data['J_2d_{}'.format(i)] = np.tile(
            np.float32(np.expand_dims(joints_2d[i], 0)),
            (opt_steps, 1, 1)
        )
        supervision['J_reproj_{}'.format(i)] = np.tile(
            np.float32(np.expand_dims(joints_2d[i], 0)),
            (opt_steps, 1, 1)
        )

    with tqdm(total=opt_steps) as pbar:
        self.opt_pose_model.fit(
            data, supervision,
            batch_size=1, epochs=1, verbose=0,
            callbacks=[LambdaCallback(on_batch_end=lambda e, l: pbar.update(1))]
        )
```
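TF 2.x's Keras validates that `fit()` receives data for every named model input, whereas the Keras 2.2.x versions the repo targets tolerated the missing `posetrans_init` key. A hedged workaround is to supply the missing input explicitly; the shape used below is purely illustrative (NOT taken from the repo; read the real one from `opt_pose_model.inputs` before using this):

```python
import numpy as np

def fill_missing_inputs(data, input_shapes):
    """Add zero arrays for any named model input absent from `data`.

    Newer Keras rejects fit() calls that omit any named model input,
    whereas Keras 2.2.x silently tolerated the missing key."""
    for name, shape in input_shapes.items():
        if name not in data:
            data[name] = np.zeros(shape, dtype=np.float32)
    return data

# Illustrative shape only -- query the model for the real 'posetrans_init' shape:
data = fill_missing_inputs({}, {'posetrans_init': (5, 24, 3)})
print(data['posetrans_init'].shape)  # (5, 24, 3)
```

Whether zeros are a valid initial value for `posetrans_init` depends on how the repo initialises that input; if it carries a pretrained initial pose/translation, copy that value instead.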
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/install_sources#common_installation_problems
How do I deal with this issue?
With all requirements met, I successfully set up Octopus in an Ubuntu Docker container.
Dirt is ok
numpy is ok
scipy is ok
tensorflow-gpu is ok
keras is ok
(all components keep minimal version requirement)
But run_demo.sh reports the following error:
root@28146d26637a:/home/thmoa/octopus# ./run_demo.sh
Using TensorFlow backend.
2019-07-22 12:23:48.700476: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-07-22 12:23:48.701614: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.645
pciBusID: 0000:05:00.0
totalMemory: 10.92GiB freeMemory: 10.77GiB
2019-07-22 12:23:48.701679: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-07-22 12:23:49.266255: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-07-22 12:23:49.266328: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-07-22 12:23:49.266348: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-07-22 12:23:49.266983: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10420 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:05:00.0, compute capability: 6.1)
Traceback (most recent call last):
File "infer_single.py", line 89, in <module>
main(args.weights, args.name, args.segm_dir, args.pose_dir, args.out_dir, args.opt_steps_pose, args.opt_steps_shape)
File "infer_single.py", line 21, in main
model = Octopus(num=len(segm_files))
File "/home/thmoa/octopus/model/octopus.py", line 197, in __init__
smpls = [NameLayer('smpl_{}'.format(i))(smpl([p, self.betas, t, self.offsets])) for i, (p, t) in enumerate(zip(self.poses, self.ts))]
File "/usr/local/lib/python2.7/dist-packages/keras/engine/base_layer.py", line 460, in __call__
output = self.call(inputs, **kwargs)
TypeError: call() takes exactly 5 arguments (2 given)
Is there a hint about the required versions that I missed?
Hi, I got this error when running run_demo.sh. Does it require a specific tensorflow version? I'm using tensorflow 1.5.0 and keras 2.1.4; the OS is Ubuntu 16.04.
Optimizing for pose...
0%| | 0/5 [00:00<?, ?it/s]
Traceback (most recent call last):
File "infer_single.py", line 85, in <module>
main(args.weights, args.name, args.segm_dir, args.pose_dir, args.out_dir, args.opt_steps_pose, args.opt_steps_shape)
File "infer_single.py", line 34, in main
model.opt_pose(segmentations, joints_2d, opt_steps=opt_pose_steps)
File "/home/bodymesh/octopus-master/model/octopus.py", line 290, in opt_pose
callbacks=[LambdaCallback(on_batch_end=lambda e, l: pbar.update(1))]
File "/home/bodymesh/octopus-master/test/local/lib/python2.7/site-packages/keras/engine/training.py", line 1689, in fit
self._make_train_function()
File "/home/bodymesh/octopus-master/test/local/lib/python2.7/site-packages/keras/engine/training.py", line 990, in _make_train_function
loss=self.total_loss)
File "/home/bodymesh/octopus-master/test/local/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/bodymesh/octopus-master/test/local/lib/python2.7/site-packages/keras/optimizers.py", line 440, in get_updates
grads = self.get_gradients(loss, params)
File "/home/bodymesh/octopus-master/test/local/lib/python2.7/site-packages/keras/optimizers.py", line 78, in get_gradients
grads = K.gradients(loss, params)
File "/home/bodymesh/octopus-master/test/local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2512, in gradients
return tf.gradients(loss, variables, colocate_gradients_with_ops=True)
File "/home/bodymesh/octopus-master/test/local/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 609, in gradients
grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
File "/home/bodymesh/octopus-master/test/local/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 375, in _MaybeCompile
return grad_fn() # Exit early
File "/home/bodymesh/octopus-master/test/local/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 609, in <lambda>
grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
File "/home/bodymesh/octopus-master/test/local/lib/python2.7/site-packages/tensorflow/python/ops/linalg_grad.py", line 275, in _SvdGrad
"SVD gradient is not implemented for compute_uv=True and "
NotImplementedError: SVD gradient is not implemented for compute_uv=True and full_matrices=False.
Optimizing for pose... 100%|██████████| 5/5 [00:20<00:00, 4.97s/it]
Optimizing for shape...
0%| | 0/15 [00:00<?, ?it/s]
Traceback (most recent call last):
File "infer_single.py", line 89, in <module>
main(args.weights, args.name, args.segm_dir, args.pose_dir, args.out_dir, args.opt_steps_pose, args.opt_steps_shape)
File "infer_single.py", line 42, in main
model.opt_shape(segmentations, joints_2d, face_2d, opt_steps=opt_shape_steps)
File "/home/yaolin/Documents/octopus/model/octopus.py", line 329, in opt_shape
callbacks=[LambdaCallback(on_batch_begin=lambda e, l: pbar.update(1))]
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1010, in fit
self._make_train_function()
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 509, in _make_train_function
loss=self.total_loss)
File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/optimizers.py", line 475, in get_updates
grads = self.get_gradients(loss, params)
File "/usr/local/lib/python2.7/dist-packages/keras/optimizers.py", line 89, in get_gradients
grads = K.gradients(loss, params)
File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 2757, in gradients
return tf.gradients(loss, variables, colocate_gradients_with_ops=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py", line 630, in gradients
gate_gradients, aggregation_method, stop_gradients)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py", line 848, in _GradientsHelper
(op.name, i, t_in.shape, in_grad.shape))
ValueError: Incompatible shapes between op input and calculated input gradient. Forward operation: render_layer_7/render_batch. Input index: 0. Original input shape: (?, 1080, 1080, 1). Calculated input gradient shape: (?, 32766, 8, 0)
I know you have mentioned that sharing the training code is not planned because of the dataset license.
But would it be possible to publish only the training code (and maybe only the structure of the training dataset), without sharing the data itself? It would certainly be useful, even if the training code is not as clean as the current repo 😄
will@will-pc:~/octopus$ bash run_demo.sh
Using TensorFlow backend.
2019-05-14 06:59:46.219620: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-05-14 06:59:46.294264: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-05-14 06:59:46.294849: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x33310a0 executing computations on platform CUDA. Devices:
2019-05-14 06:59:46.294862: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
2019-05-14 06:59:46.296246: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 4200000000 Hz
2019-05-14 06:59:46.296956: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x3399d30 executing computations on platform Host. Devices:
2019-05-14 06:59:46.296970: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
2019-05-14 06:59:46.297347: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.645
pciBusID: 0000:01:00.0
totalMemory: 10.92GiB freeMemory: 10.50GiB
2019-05-14 06:59:46.297357: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-05-14 06:59:46.298031: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-14 06:59:46.298043: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-05-14 06:59:46.298050: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-05-14 06:59:46.298429: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10213 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
WARNING:tensorflow:From /home/will/octopus/smpl/batch_lbs.py:83: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /home/will/dirt/dirt/matrices.py:40: calling norm (from tensorflow.python.ops.linalg_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
Optimizing for pose...
0%| | 0/5 [00:00<?, ?it/s]2019-05-14 07:00:18.863394: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
2019-05-14 07:00:24.630585: I tensorflow/core/kernels/cuda_solvers.cc:159] Creating CudaSolver handles for stream 0x33e9260
100%|██████████| 5/5 [00:23<00:00, 5.56s/it]
Optimizing for shape...
0%| | 0/15 [00:00<?, ?it/s]WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/sparse_grad.py:113: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
7%|▋ | 1/15 [00:13<03:05, 13.28s/it]2019-05-14 07:01:00.121440: I /home/will/dirt/csrc/gl_common.h:66] selected egl device #0 to match cuda device #0 for thread 0x7fdc1e480700
run_demo.sh: line 2: 4372 Segmentation fault (core dumped) sudo python infer_single.py sample data/sample/segmentations data/sample/keypoints --out_dir out
Environment: Python 2.7, tensorflow 1.13.1 (GPU version).
Could anyone share ideas? Thanks.
Hi,
I use :
Ubuntu 18.04.3 LTS
Python 2.7.15+
cuda_10.0
libcudnn7_7.6.2.24-1+cuda10.0_amd64
for my latest installation attempt, but again it wouldn't run and I got an error.
My problem is how to make TensorFlow compatible with Dirt and with this code; do you provide a tested configuration?
I have tested many times on Ubuntu 16.04 with two versions of CUDA, and now on Ubuntu 18.04 with CUDA 10.0 because of Tensorflow 2.7.
Using TensorFlow backend.
WARNING: Logging before flag parsing goes to stderr.
W0820 20:38:20.909133 140312402794304 deprecation_wrapper.py:119] From infer_single.py:19: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
W0820 20:38:20.909305 140312402794304 deprecation_wrapper.py:119] From infer_single.py:19: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
W0820 20:38:20.909396 140312402794304 deprecation_wrapper.py:119] From infer_single.py:19: The name tf.GPUOptions is deprecated. Please use tf.compat.v1.GPUOptions instead.
2019-08-20 20:38:20.914915: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2019-08-20 20:38:21.030470: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-20 20:38:21.031025: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5619f8538d30 executing computations on platform CUDA. Devices:
2019-08-20 20:38:21.031051: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): GeForce GTX 1080, Compute Capability 6.1
2019-08-20 20:38:21.052203: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3411290000 Hz
2019-08-20 20:38:21.052807: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5619f97f7650 executing computations on platform Host. Devices:
2019-08-20 20:38:21.052840: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): ,
2019-08-20 20:38:21.052988: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-20 20:38:21.053602: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:01:00.0
2019-08-20 20:38:21.053939: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-08-20 20:38:21.055101: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-08-20 20:38:21.056089: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
2019-08-20 20:38:21.056417: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
2019-08-20 20:38:21.057829: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
2019-08-20 20:38:21.058938: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
2019-08-20 20:38:21.062348: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-08-20 20:38:21.062515: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-20 20:38:21.063226: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-20 20:38:21.063829: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-08-20 20:38:21.063948: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-08-20 20:38:21.065663: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-20 20:38:21.065683: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2019-08-20 20:38:21.065694: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2019-08-20 20:38:21.065980: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-20 20:38:21.066705: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-08-20 20:38:21.067362: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7327 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
W0820 20:38:21.068140 140312402794304 deprecation_wrapper.py:119] From /home/admin/VFS/venv/local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
W0820 20:38:21.082931 140312402794304 deprecation.py:323] From /home/admin/VFS/octopus/smpl/batch_lbs.py:83: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
Traceback (most recent call last):
File "infer_single.py", line 89, in <module>
main(args.weights, args.name, args.segm_dir, args.pose_dir, args.out_dir, args.opt_steps_pose, args.opt_steps_shape)
File "infer_single.py", line 21, in main
model = Octopus(num=len(segm_files))
File "/home/admin/VFS/octopus/model/octopus.py", line 80, in __init__
pose = tf.reshape(batch_rodrigues(pose_raw.reshape(-1, 3).astype(np.float32)), (-1, ))
File "/home/admin/VFS/octopus/smpl/batch_lbs.py", line 93, in batch_rodrigues
r, batch_size=batch_size)
File "/home/admin/VFS/octopus/smpl/batch_lbs.py", line 51, in batch_skew
with tf.name_scope("batch_skew", [vec]):
File "/home/admin/VFS/venv/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 6450, in __init__
"pass this into the values kwarg." % type(default_name))
TypeError: default_name type (<type 'list'>) is not a string type. You likely meant to pass this into the values kwarg.
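In TF 1.x, `tf.name_scope` has the signature `name_scope(name, default_name=None, values=None)`, so the repo's call `tf.name_scope("batch_skew", [vec])` sends the tensor list into `default_name`, which newer TF versions type-check. Passing the list by keyword avoids the TypeError; a minimal demonstration of the mismatch (`name_scope` below is a simplified mock, not the real TF function):

```python
def name_scope(name, default_name=None, values=None):
    """Simplified stand-in mimicking TF 1.x's argument checking."""
    if default_name is not None and not isinstance(default_name, str):
        raise TypeError(
            "default_name type (%s) is not a string type. You likely meant "
            "to pass this into the values kwarg." % type(default_name))
    return name

vec = object()
# name_scope("batch_skew", [vec])  # raises the reported TypeError
print(name_scope("batch_skew", values=[vec]))  # batch_skew
```

Applied to smpl/batch_lbs.py, the fix would be `with tf.name_scope("batch_skew", values=[vec]):` (hedged: verify against your installed TF version, since TF 2.x changed the signature again).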
Hey guys! My system is Ubuntu 18.04. I created a virtualenv for the dirt installation: cudatoolkit 10.0.130; cudnn 7.6.5; Python 3.7; tensorflow and tensorflow-gpu 1.13; scipy 1.5; numpy 1.16. And I am sure the tensorflow-gpu build is working.
But when I run pip install . or pip install -e ., it automatically uninstalls tensorflow and tensorflow-gpu 1.13 and replaces them with 2.9. Why? Does anyone know the answer? Thank you, and looking forward to your replies!
Besides, when I run pip install . or pip install -e ., the error below always occurs:
Please let me know the number of GPUs required and the GPU memory needed for running single and multiple images.
In your paper you mention that you re-trained the predictor to be able to process binary segmentation masks for cases with minimal clothing.
Could you share the weights of that version?
My configuration:
I have installed dirt and it passed the tests.
I then ran run_demo.sh.
The following error occurs:
Using TensorFlow backend.
2019-07-07 15:22:11.203873: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
2019-07-07 15:22:11.420539: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] Found device 0 with properties:
name: GeForce RTX 2080 major: 7 minor: 5 memoryClockRate(GHz): 1.71
pciBusID: 0000:17:00.0
totalMemory: 7.77GiB freeMemory: 7.65GiB
2019-07-07 15:22:11.513161: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] Found device 1 with properties:
name: GeForce RTX 2080 major: 7 minor: 5 memoryClockRate(GHz): 1.71
pciBusID: 0000:65:00.0
totalMemory: 7.76GiB freeMemory: 6.96GiB
2019-07-07 15:22:11.513252: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1227] Device peer to peer matrix
2019-07-07 15:22:11.513303: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1233] DMA: 0 1
2019-07-07 15:22:11.513310: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1243] 0: Y N
2019-07-07 15:22:11.513317: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1243] 1: N Y
2019-07-07 15:22:11.513328: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1312] Adding visible gpu devices: 0, 1
2019-07-07 15:22:11.970348: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7371 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080, pci bus id: 0000:17:00.0, compute capability: 7.5)
2019-07-07 15:22:11.970756: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 6703 MB memory) -> physical GPU (device: 1, name: GeForce RTX 2080, pci bus id: 0000:65:00.0, compute capability: 7.5)
WARNING:tensorflow:From /home/yifu/workspace/dirt/dirt/matrices.py:40: calling norm (from tensorflow.python.ops.linalg_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
Optimizing for pose...
0%| | 0/5 [00:00<?, ?it/s]2019-07-07 15:22:34.709224: I tensorflow/core/kernels/cuda_solvers.cc:159] Creating CudaSolver handles for stream 0x55dab8c80d60
2019-07-07 15:22:34.967870: E tensorflow/stream_executor/cuda/cuda_blas.cc:635] failed to run cuBLAS routine cublasSgemmBatched: CUBLAS_STATUS_EXECUTION_FAILED
2019-07-07 15:22:34.967897: E tensorflow/stream_executor/cuda/cuda_blas.cc:2404] Internal: failed BLAS call, see log for details
2019-07-07 15:22:34.988828: I tensorflow/stream_executor/stream.cc:4624] stream 0x55dac946b440 did not memcpy device-to-host; source: 0x7f3bcc12a300
2019-07-07 15:22:35.006599: I tensorflow/stream_executor/stream.cc:4624] stream 0x55dac946b440 did not memcpy device-to-host; source: 0x7f3bcc064c00
2019-07-07 15:22:35.025165: I tensorflow/stream_executor/stream.cc:4624] stream 0x55dac946b440 did not memcpy device-to-host; source: 0x7f3cc175a800
2019-07-07 15:22:35.044649: I tensorflow/stream_executor/stream.cc:4624] stream 0x55dac946b440 did not memcpy device-to-host; source: 0x7f3bcc064c00
2019-07-07 15:22:35.063752: I tensorflow/stream_executor/stream.cc:4624] stream 0x55dac946b440 did not memcpy device-to-host; source: 0x7f3bcc01f400
2019-07-07 15:22:35.081809: I tensorflow/stream_executor/stream.cc:4624] stream 0x55dac946b440 did not memcpy device-to-host; source: 0x7f3bcc01f500
2019-07-07 15:22:35.100634: I tensorflow/stream_executor/stream.cc:4624] stream 0x55dac946b440 did not memcpy device-to-host; source: 0x7f3bcc064f00
Any tips or advice? It would also be helpful if the author could give the recommended versions of all components (Python, TensorFlow, CUDA). Thanks a lot!
Hi, thank you for releasing the code for the paper @thmoa. I've cloned your code successfully on my Ubuntu server. But when I run infer_single.py, it shows an ImportError: No module named mesh.mesh.
Using TensorFlow backend.
Traceback (most recent call last):
File "./infer_single.py", line 88, in <module>
main(args.weights, args.name, args.segm_dir, args.pose_dir, args.out_dir, args.opt_steps_pose, args.opt_steps_shape)
File "./infer_single.py", line 17, in main
model = Octopus(num=len(segm_files))
File "/home/***/***/octopus/model/octopus.py", line 169, in __init__
sampling = pkl.load(f)
ImportError: No module named mesh.mesh
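The ImportError comes from `pkl.load` rather than from an import statement: the pickle file records the module path `mesh.mesh` for a class it contains, and unpickling fails if no package is importable under that exact path. One hedged workaround (the remap target below is hypothetical; point it at wherever a compatible Mesh class actually lives on your system, or simply install the authors' mesh package so the original path resolves):

```python
import collections
import io
import pickle

class RenamingUnpickler(pickle.Unpickler):
    """Unpickler that rewrites module paths recorded in the pickle stream."""
    def __init__(self, file, remap):
        super().__init__(file)
        self.remap = remap

    def find_class(self, module, name):
        # Redirect lookups for modules that are not importable locally.
        return super().find_class(self.remap.get(module, module), name)

# Self-test with a pickle whose stored module path does not exist:
raw = pickle.dumps(collections.OrderedDict(a=1), 0)
raw = raw.replace(b'collections', b'fakemod')
obj = RenamingUnpickler(io.BytesIO(raw), {'fakemod': 'collections'}).load()
print(dict(obj))  # {'a': 1}

# For the octopus asset one might try (remap target is an assumption):
# sampling = RenamingUnpickler(f, {'mesh.mesh': 'your_mesh_pkg.mesh'}).load()
```

This only works if the replacement module really exposes a class with the same name and a compatible layout; otherwise installing the original `mesh` dependency is the safer route.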
Hi~ Thanks for your great work.
When I ran the code, I got the following error.
WARNING:tensorflow:From /home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Traceback (most recent call last):
File "infer_single.py", line 89, in <module>
main(args.weights, args.name, args.segm_dir, args.pose_dir, args.out_dir, args.opt_steps_pose, args.opt_steps_shape)
File "infer_single.py", line 21, in main
model = Octopus(num=len(segm_files))
File "/home/frank/PycharmProjects/octopus/model/octopus.py", line 186, in __init__
conv_l3 = GraphConvolution(32, tf_A[3], activation='relu', name='conv_l3', trainable=False)(shape_features)
File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/keras/engine/base_layer.py", line 457, in __call__
output = self.call(inputs, **kwargs)
File "/home/frank/PycharmProjects/octopus/graphconv/graphconvlayer.py", line 43, in call
output = supports[0]
IndexError: list index out of range
Plus, can you provide more environment information for setting up?
For example, the python version, exact tensorflow-gpu and keras version. (Some parts of tensorflow-gpu are now changed I think.)
Thanks in advance!
My basic environment:
tensorflow-gpu = 1.11.0
keras = 2.2.4 (which is slightly above 2.2.0)
And dirt is ok, compiled from source.
I am using conda and python version is 2.7.16.
While running the test script run_demo.sh, the following error is reported:
I guess maybe the keras version difference causes this?
Hello!
I'm trying to build and use the octopus on the following configuration:
I successfully built dirt using these commands:
```sh
git clone https://github.com/pmh47/dirt.git
cd dirt
mkdir build ; cd build
vim ../csrc/CMakeLists.txt   # comment out: add_definitions(-D_GLIBCXX_USE_CXX11_ABI=0)
cmake ../csrc
vim CMakeFiles/rasterise.dir/flags.make   # add: CUDA_FLAGS = -DNDEBUG
make
cd ..
pip install -e .
```
The dirt tests (lighting_tests.py and square_test.py) passed without any errors.
And when I try to run the octopus tests, at the end I receive
...
2019-04-25 07:57:54.313242: I /opt/dirt/csrc/gl_common.h:84] successfully created new GL context on thread 0x7f371f9f6700 (EGL = 1.5, GL = 4.6.0 NVIDIA 410.48, renderer = GeForce RTX 2080 Ti/PCIe/SSE2)
2019-04-25 07:57:54.322804: I /opt/dirt/csrc/rasterise_egl.cpp:266] reinitialised framebuffer with size 1080 x 1080
2019-04-25 07:57:54.333154: I /opt/dirt/csrc/gl_common.h:66] selected egl device #0 to match cuda device #0 for thread 0x7f371c9bd700
2019-04-25 07:57:54.360844: I /opt/dirt/csrc/gl_common.h:84] successfully created new GL context on thread 0x7f371c9bd700 (EGL = 1.5, GL = 4.6.0 NVIDIA 410.48, renderer = GeForce RTX 2080 Ti/PCIe/SSE2)
2019-04-25 07:57:54.366606: F /opt/dirt/csrc/rasterise_grad_egl.cpp:194] cudaGraphicsGLRegisterImage failed: cudaErrorNotSupported
run_demo.sh: line 2: 27784 Aborted (core dumped) python infer_single.py sample data/sample/segmentations data/sample/keypoints --out_dir out
What could be the problem?
P.S.: My `ls -l /usr/lib/*/*GL*`:
-rw-r--r-- 1 root root 67900 May 23 2018 /usr/lib/girepository-1.0/GstGL-1.0.typelib
lrwxrwxrwx 1 root root 20 Feb 9 01:02 /usr/lib/x86_64-linux-gnu/libEGL_mesa.so.0 -> libEGL_mesa.so.0.0.0
-rw-r--r-- 1 root root 242840 Feb 9 01:02 /usr/lib/x86_64-linux-gnu/libEGL_mesa.so.0.0.0
lrwxrwxrwx 1 root root 23 Apr 25 05:54 /usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.0 -> libEGL_nvidia.so.410.48
-rwxr-xr-x 1 root root 1031552 Apr 25 05:54 /usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.410.48
lrwxrwxrwx 1 root root 41 Apr 25 07:40 /usr/lib/x86_64-linux-gnu/libEGL.so -> /usr/lib/x86_64-linux-gnu/libEGL.so.1.0.0
lrwxrwxrwx 1 root root 41 Apr 25 08:48 /usr/lib/x86_64-linux-gnu/libEGL.so.1 -> /usr/lib/x86_64-linux-gnu/libEGL.so.1.0.0
-rw-r--r-- 1 root root 80448 Aug 15 2018 /usr/lib/x86_64-linux-gnu/libEGL.so.1.0.0
lrwxrwxrwx 1 root root 22 Aug 15 2018 /usr/lib/x86_64-linux-gnu/libGLdispatch.so -> libGLdispatch.so.0.0.0
lrwxrwxrwx 1 root root 22 Apr 25 05:54 /usr/lib/x86_64-linux-gnu/libGLdispatch.so.0 -> libGLdispatch.so.0.0.0
-rw-r--r-- 1 root root 612792 Aug 15 2018 /usr/lib/x86_64-linux-gnu/libGLdispatch.so.0.0.0
lrwxrwxrwx 1 root root 29 Apr 25 05:54 /usr/lib/x86_64-linux-gnu/libGLESv1_CM_nvidia.so.1 -> libGLESv1_CM_nvidia.so.410.48
-rwxr-xr-x 1 root root 60200 Apr 25 05:54 /usr/lib/x86_64-linux-gnu/libGLESv1_CM_nvidia.so.410.48
lrwxrwxrwx 1 root root 21 Aug 15 2018 /usr/lib/x86_64-linux-gnu/libGLESv1_CM.so -> libGLESv1_CM.so.1.0.0
lrwxrwxrwx 1 root root 21 Aug 15 2018 /usr/lib/x86_64-linux-gnu/libGLESv1_CM.so.1 -> libGLESv1_CM.so.1.0.0
-rw-r--r-- 1 root root 43328 Aug 15 2018 /usr/lib/x86_64-linux-gnu/libGLESv1_CM.so.1.0.0
lrwxrwxrwx 1 root root 26 Apr 25 05:54 /usr/lib/x86_64-linux-gnu/libGLESv2_nvidia.so.2 -> libGLESv2_nvidia.so.410.48
-rwxr-xr-x 1 root root 111400 Apr 25 05:54 /usr/lib/x86_64-linux-gnu/libGLESv2_nvidia.so.410.48
lrwxrwxrwx 1 root root 18 Aug 15 2018 /usr/lib/x86_64-linux-gnu/libGLESv2.so -> libGLESv2.so.2.0.0
lrwxrwxrwx 1 root root 18 Apr 25 05:54 /usr/lib/x86_64-linux-gnu/libGLESv2.so.2 -> libGLESv2.so.2.0.0
-rw-r--r-- 1 root root 72000 Aug 15 2018 /usr/lib/x86_64-linux-gnu/libGLESv2.so.2.0.0
-rw-r--r-- 1 root root 671 Apr 25 05:54 /usr/lib/x86_64-linux-gnu/libGL.la
lrwxrwxrwx 1 root root 14 Aug 15 2018 /usr/lib/x86_64-linux-gnu/libGL.so -> libGL.so.1.0.0
lrwxrwxrwx 1 root root 40 Apr 25 07:36 /usr/lib/x86_64-linux-gnu/libGL.so.1 -> /usr/lib/x86_64-linux-gnu/libGL.so.1.0.0
-rw-r--r-- 1 root root 567624 Aug 15 2018 /usr/lib/x86_64-linux-gnu/libGL.so.1.0.0
lrwxrwxrwx 1 root root 15 Apr 24 20:17 /usr/lib/x86_64-linux-gnu/libGLU.so.1 -> libGLU.so.1.3.1
-rw-r--r-- 1 root root 453352 May 22 2016 /usr/lib/x86_64-linux-gnu/libGLU.so.1.3.1
lrwxrwxrwx 1 root root 23 Apr 25 05:54 /usr/lib/x86_64-linux-gnu/libGLX_indirect.so.0 -> libGLX_nvidia.so.410.48
lrwxrwxrwx 1 root root 20 Feb 9 01:02 /usr/lib/x86_64-linux-gnu/libGLX_mesa.so.0 -> libGLX_mesa.so.0.0.0
-rw-r--r-- 1 root root 479992 Feb 9 01:02 /usr/lib/x86_64-linux-gnu/libGLX_mesa.so.0.0.0
lrwxrwxrwx 1 root root 23 Apr 25 05:54 /usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.0 -> libGLX_nvidia.so.410.48
-rwxr-xr-x 1 root root 1270576 Apr 25 05:54 /usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.410.48
lrwxrwxrwx 1 root root 15 Aug 15 2018 /usr/lib/x86_64-linux-gnu/libGLX.so -> libGLX.so.0.0.0
lrwxrwxrwx 1 root root 41 Apr 25 08:49 /usr/lib/x86_64-linux-gnu/libGLX.so.0 -> /usr/lib/x86_64-linux-gnu/libGLX.so.0.0.0
-rw-r--r-- 1 root root 68144 Aug 15 2018 /usr/lib/x86_64-linux-gnu/libGLX.so.0.0.0
lrwxrwxrwx 1 root root 18 Aug 15 2018 /usr/lib/x86_64-linux-gnu/libOpenGL.so -> libOpenGL.so.0.0.0
lrwxrwxrwx 1 root root 18 Aug 15 2018 /usr/lib/x86_64-linux-gnu/libOpenGL.so.0 -> libOpenGL.so.0.0.0
-rw-r--r-- 1 root root 186688 Aug 15 2018 /usr/lib/x86_64-linux-gnu/libOpenGL.so.0.0.0
run_demo.sh code:
#!/usr/bin/env bash
python3 infer_single.py sample data/sample/segmentations data/sample/keypoints --out_dir out
Error:
Traceback (most recent call last):
File "infer_single.py", line 9, in <module>
from model.octopus import Octopus
File "/media/r/edata/code/nns/pose_det/octopus/model/octopus.py", line 206
self.laplacian = Lambda(lambda (v0, v1): compute_laplacian_diff(v0, v1, self.faces), name='laplacian')(
^
SyntaxError: invalid syntax
Any tips or suggestions? I'm using Python 3.7 on Ubuntu 19.04 and had to install dirt manually, but that worked.
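For anyone hitting the same SyntaxError: the line uses Python 2's tuple parameter unpacking in a lambda (`lambda (v0, v1): ...`), which was removed in Python 3 (PEP 3113). A minimal sketch of a Python-3-compatible rewrite, with a stand-in for the real `compute_laplacian_diff` just to make it runnable:

```python
# Stand-in for the repo's compute_laplacian_diff, purely for illustration.
def compute_laplacian_diff(v0, v1, faces):
    return [(b - a, f) for a, b, f in zip(v0, v1, faces)]

faces = [0, 1, 2]

# Python 2 (SyntaxError in Python 3):
#   lambda (v0, v1): compute_laplacian_diff(v0, v1, faces)
# Python 3: take one argument and index into it instead.
laplacian_fn = lambda args: compute_laplacian_diff(args[0], args[1], faces)

print(laplacian_fn(([1, 2, 3], [4, 6, 8])))  # [(3, 0), (4, 1), (5, 2)]
```

The same one-line change at `model/octopus.py:206` (unpacking inside the body rather than in the parameter list) should resolve this error without altering behavior.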
Hi,
Could you please set up this repository in Google Colab, so that it handles all the necessary requirements? Installing dirt is very complicated and there are a lot of version mismatches.
Thanks
Does it require a fixed body height?
Hello,
First of all, thank you for your open source project!
When I run either bash run_demo.sh or bash run_batch_demo.sh, it reports an error:
TypeError: __init__() takes 2 positional arguments but 4 were given
How can I solve it? Thank you in advance!
Hello,
Thank you for this work. I am trying to get the SMPL offset estimates, but I have not figured out how to extract them to a file. Could you tell me how to obtain them?
Thank you,
OS: Ubuntu 16
Python 3
I get an error loading the pickle file (assets/smpl_sampling.pkl): No module named 'numpy.core._multiarray_umath'
(models/octopus.py line 169)
I used octopus to generate the vertices in each frame, but the focal length was left at a default value of [1080, 1080]. So the vertices are not in their real positions relative to the camera, and the generated visibility map gives poor results. Could you change the code so that octopus takes the real camera focal length?
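Why the default focal length matters: under the pinhole camera model, projected image coordinates scale linearly with the focal length, so rendering with f=1080 when the true focal length differs shifts every projected vertex. A minimal sketch (not the repo's actual camera code; the focal lengths and point are made-up values):

```python
def project(point3d, f, cx=540.0, cy=540.0):
    """Pinhole projection: u = f*X/Z + cx, v = f*Y/Z + cy."""
    x, y, z = point3d
    return (f * x / z + cx, f * y / z + cy)

p = (0.5, -0.3, 2.0)            # a vertex 0.5 m right, 0.3 m up, 2 m away
print(project(p, f=1080.0))     # default focal length -> (810.0, 378.0)
print(project(p, f=1500.0))     # a different true focal length -> (915.0, 315.0)
```

A 40% error in f here moves the vertex by over 100 pixels, which is consistent with the visibility map degrading when the true intrinsics are not used.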
Hi thmoa, I have been reproducing your paper recently and would like to know some of the training details, since the paper is not very clear about them.
Hi @thmoa, fantastic work! Since I want to adopt it as a baseline, I wonder if you could release the training code?