LASR

Installation

Build with conda

conda env create -f lasr.yml
conda activate lasr
# install softras
# to compile for different GPU arch, see https://discuss.pytorch.org/t/compiling-pytorch-on-devices-with-different-cuda-capability/106409
pip install -e third_party/softras/
# install manifold remeshing
git clone --recursive git@github.com:hjwdzh/Manifold.git; cd Manifold; mkdir build; cd build; cmake .. -DCMAKE_BUILD_TYPE=Release; make -j8; cd ../../

For docker installation, please see install.md
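
Once the docker image is built, commands from this README can be run inside the container. As a rough sketch (assuming the image is tagged lasr, as in the issue reports further below; see install.md for the exact build steps):

docker run -v $(pwd):/lasr --gpus all lasr bash -c 'cd lasr; source activate lasr; bash scripts/spot3.sh'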

Overview

We provide instructions for data preparation and shape optimization on three types of data:

  • Spot: synthetic rendering of 3D meshes, for debugging and evaluation
  • DAVIS-camel: video frames with ground-truth segmentation masks
  • Pika: your own video

We recommend first trying Spot to make sure the system works, and then running the other two examples.

Data preparation

Create folders to store intermediate data and training logs

mkdir log; mkdir tmp; 

The following steps generate data in subfolders under ./database/DAVIS/.

Spot: synthetic data

Download and unzip the pre-computed {silhouette, flow, RGB} renderings of Spot:

gdown https://drive.google.com/uc?id=11Y3WQ0Qd7W-6Wds1_A7KsTbaG7jrmG7N -O spot.zip
unzip spot.zip -d database/DAVIS/

Alternatively, you can render the same data locally by running:

python scripts/render_syn.py

DAVIS-camel: real video frames with segmentation

First, download the DAVIS 2017 trainval set and copy the JPEGImages/Full-Resolution and Annotations/Full-Resolution folders of DAVIS-camel into the corresponding folders under ./database/DAVIS/.

cp ...davis-path/DAVIS/Annotations/Full-Resolution/camel/ -rf database/DAVIS/Annotations/Full-Resolution/
cp ...davis-path/DAVIS-lasr/DAVIS/JPEGImages/Full-Resolution/camel/ -rf database/DAVIS/JPEGImages/Full-Resolution/

Then, download the pre-trained VCN optical flow model:

mkdir ./lasr_vcn
gdown https://drive.google.com/uc?id=139S6pplPvMTB-_giI6V2dxpOHGqqAdHn -O ./lasr_vcn/vcn_rob.pth

Run VCN-robust to predict optical flow on DAVIS camel video:

bash preprocess/auto_gen.sh camel

Pika: your own video

You will need to install detectron2 and clone its repository to obtain object segmentations, as instructed below.

python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu110/torch1.7/index.html
git clone https://github.com/facebookresearch/detectron2

First, use any video processing tool (such as ffmpeg) to extract frames into JPEGImages/Full-Resolution/name-of-the-video.

mkdir database/DAVIS/JPEGImages/Full-Resolution/pika-tmp/
ffmpeg -ss 00:00:04 -i database/raw/IMG-7495.MOV -vf fps=10 database/DAVIS/JPEGImages/Full-Resolution/pika-tmp/%05d.jpg

Then, run PointRend to get segmentations:

cd preprocess
python mask.py pika ./detectron2; cd -

Assuming you have downloaded VCN flow in the previous step, run flow prediction:

bash preprocess/auto_gen.sh pika

Single video optimization

Spot

Next, we optimize the shape, texture, and camera parameters from image observations. Optimizing Spot takes ~20 min on a single Titan Xp GPU.

bash scripts/spot3.sh

To render the optimized shape, texture, and camera parameters:

bash scripts/extract.sh spot3-1 10 1 26 spot3 no no
python render_vis.py --testdir log/spot3-1/ --seqname spot3 --freeze --outpath tmp/1.gif

DAVIS-camel

Optimize on camel observations.

bash scripts/template.sh camel

To render the optimized camel:

bash scripts/render_result.sh camel

Pika

Similarly, run the following to reconstruct pika:

bash scripts/template.sh pika

To render the reconstructed shape:

bash scripts/render_result.sh pika

Monitor optimization

To monitor optimization, run

tensorboard --logdir log/
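
If the optimization runs on a remote machine, the dashboard can be bound to all interfaces and a fixed port using standard TensorBoard flags (not LASR-specific), for example:

tensorboard --logdir log/ --port 6006 --bind_all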

Example outputs

Evaluation

Run the following command to evaluate 3D shape accuracy for synthetic spot.

python scripts/eval_mesh.py --testdir log/spot3-1/ --gtdir database/DAVIS/Meshes/Full-Resolution/syn-spot3f/

Run the following command to evaluate keypoint accuracy on BADJA.

python scripts/eval_badja.py --testdir log/camel-5/ --seqname camel

Additional Notes

Optimize with ground-truth camera

We provide an example using synthetic spot data. Please run

bash scripts/spot3-gtcam.sh

Other videos in DAVIS/BADJA

Please refer to the data preparation and optimization steps of the camel example, replacing camel with another sequence name, such as dance-twirl (see the example below). We provide config files in the configs folder.
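
For example, after copying the DAVIS frames and annotations for dance-twirl into database/ as above, a plausible sequence of commands is (assuming a matching config exists in the configs folder):

bash preprocess/auto_gen.sh dance-twirl
bash scripts/template.sh dance-twirl
bash scripts/render_result.sh dance-twirl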

Synthetic articulated objects

To render and reproduce results on articulated objects (Sec. 4.2), you will need to purchase and download the 3D models here. We use Blender to export animated meshes and then run the rendering script:

python scripts/render_syn.py --outdir syn-dog-15 --nframes 15 --alpha 0.5 --model dog

Optimize on rendered observations

bash scripts/dog15.sh

To render the optimized dog:

bash scripts/render_result.sh dog

Batchsize

The current codebase is tested with batchsize=4. The batch size can be modified in scripts/template.sh. Note that decreasing the batch size will improve speed but reduce stability.
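
Since the exact variable or flag name is not listed here, a quick way to locate the batch-size setting is to search the script, e.g.:

grep -n -i batch scripts/template.sh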

Distributed training

The current codebase supports single-node, multi-GPU training with PyTorch DistributedDataParallel. Please modify dev and ngpu in scripts/template.sh to select devices, as sketched below.
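
As a sketch only (the names dev and ngpu come from the note above, but the exact format expected by scripts/template.sh may differ), selecting two GPUs might look like:

dev=0,1
ngpu=2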

Acknowledgement

The code borrows the skeleton of CMR.

External repos: SoftRas, VCN, Manifold, and detectron2/PointRend.

External data: DAVIS, BADJA, and the synthetic Spot renderings.

Citation

To cite our paper,

@inproceedings{yang2021lasr,
  title={LASR: Learning Articulated Shape Reconstruction from a Monocular Video},
  author={Yang, Gengshan 
      and Sun, Deqing
      and Jampani, Varun
      and Vlasic, Daniel
      and Cole, Forrester
      and Chang, Huiwen
      and Ramanan, Deva
      and Freeman, William T
      and Liu, Ce},
  booktitle={CVPR},
  year={2021}
}  

lasr's Issues

OOM ERROR

Dear authors,
I ran into a CUDA out-of-memory error. I used an RTX 2080 Ti GPU, which has 11 GB of memory.
Which kind of GPU did you use?
Thanks

RuntimeError: CUDA error: invalid device ordinal

I used docker to build the environment.
I prepared the DAVIS data and tried to run the camel optimization ("Optimize on camel observations").

Then, I got a CUDA error and the execution did not proceed.

Can you tell me the cause?

docker run -v $(pwd):/lasr --gpus all lasr bash -c 'cd lasr; source activate lasr; bash scripts/template.sh camel'
Jitting Chamfer 3D
Jitting Chamfer 3D
Loaded JIT 3D CUDA chamfer distance
Loaded JIT 3D CUDA chamfer distance
Traceback (most recent call last):
File "optimize.py", line 59, in
app.run(main)
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/absl/app.py", line 303, in run
_run_main(main, args)
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "optimize.py", line 40, in main
torch.cuda.set_device(opts.local_rank)
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/torch/cuda/init.py", line 263, in set_device
torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal

Pdb mode

Excuse me for asking again and again.

After running bash scripts/spot3.sh, the terminal goes into Pdb mode.
What should I enter here?

OpenGL.error.GLError

I set up an environment in docker and tried to run the rendering code.

As a result, I got an OpenGL error. Is this due to a different version installed or something else?

"""
docker run -v $(pwd):/lasr --gpus all lasr bash -c 'cd lasr; source activate lasr; python render_vis.py --testdir log/spot3-1/ --seqname spot3 --freeze --outpath tmp/1.gif'

log/spot3-1/
syn-spot3f/0
syn-spot3f/1
0
Traceback (most recent call last):
File "render_vis.py", line 292, in
main()
File "render_vis.py", line 226, in main
r = OffscreenRenderer(img_size, img_size)
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/pyrender/offscreen.py", line 31, in init
self._create()
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/pyrender/offscreen.py", line 149, in _create
self._platform.init_context()
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/pyrender/platforms/egl.py", line 186, in init_context
self._egl_context = eglCreateContext(
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 402, in call
return self( *args, **named )
File "/anaconda3/envs/lasr/lib/python3.8/site-packages/OpenGL/error.py", line 228, in glCheckError
raise GLError(
OpenGL.error.GLError: GLError(
err = 12297,
baseOperation = eglCreateContext,
cArguments = (
<OpenGL._opaque.EGLDisplay_pointer object at 0x7f1e16d3a1c0>,
<OpenGL._opaque.EGLConfig_pointer object at 0x7f1e16d3a240>,
<OpenGL._opaque.EGLContext_pointer object at 0x7f1e16d85040>,
<OpenGL.arrays.lists.c_int_Array_7 object at 0x7f1e2fd5a940>,
),
result = <OpenGL._opaque.EGLContext_pointer object at 0x7f1e16d3a8c0>
)
"""

LASR fails for sequence of a person

Hello, I am looking to run LASR on a couple of different scenes showing a single person. For one (RGB sequence: https://user-images.githubusercontent.com/6766142/126760093-b96c19ae-8e15-4cb6-8942-8ad0a420a2e5.mp4 LASR results: https://user-images.githubusercontent.com/6766142/126760220-8ceff0c3-03bd-432e-8d7a-0b1789112dc7.mp4), LASR works very well using the default parameters with symmetry disabled. However, for the other, the method runs to completion but produces invalid results. The RGB sequence is:
https://user-images.githubusercontent.com/6766142/126758853-57390ec1-966d-4488-979e-a1f92632bfb5.mp4

The results using default values (symmetry enabled) show a phantom copy and the mesh doesn't deform to match the mask:

vi_symm-vi-5-10.mp4

I disabled the symmetry and now the resulting mesh is an amorphous blob that doesn't even overlap the mask:

vi-vi-5-10.mp4

Monitoring the trends in tensorboard seems to show that everything proceeded well until the end of the first epoch, so I ran the method for only a single epoch, which gives the best results so far (although somewhat reminiscent of a tadpole):

vi-vi_e1-5-10.mp4

I also tried larger batch sizes as suggested in the readme (6 and 10), but this didn't seem to make any difference in the results. I verified that the masks and flow fields didn't look vastly incorrect. I'm wondering if this is a known issue, or whether you might have an idea of what has gone wrong for this scene. Thanks!

How to plot such a figure?

Hi,

Could you explain how to plot Figure 2, especially the colorful 3D mesh on the top right side of the figure? Which package did you use, or which code snippets in the repo did you use? Thank you!

bestanoy

Question on Flow preprocessing

Hi,

Thank you for open-sourcing your awesome work.

Could you explain what is going on in the flow pre-processing below?

lasr/dataloader/vidbase.py

Lines 145 to 151 in 492fa41

flow[:,:,0] += (center[0]-length[0]) - (centern[0]-lengthn[0]) + betax*(alp-alpn)
flow[:,:,1] += (center[1]-length[1]) - (centern[1]-lengthn[1]) + betay*(alp-alpn)
flow /= alpn
flow[:,:,0] = 2 * (flow[:,:,0]/maxw)
flow[:,:,1] = 2 * (flow[:,:,1]/maxh)
flow[:,:,2] = np.logical_and(flow[:,:,2]!=0, occ<10) # as the valid pixels

Why is this preferred over a simple MSE penalty over raw flow fields?

Thanks!!

ModuleNotFoundError: No module named 'point_rend'

I ran

python mask.py pika path-to-detectron2-root; cd -

and the following error occurred. How can I solve this problem?

Traceback (most recent call last):
File "mask.py", line 45, in
import point_rend
ModuleNotFoundError: No module named 'point_rend'
/home/shiori/lasr-main

Bone Length

Nice work! I see Jb is the position of the center of the b-th bone (or Gaussian component). But do you need to define the bone length as well?

ninja: no work to do.

I started building the conda environment again.
I get output saying that ninja has no work to do; does this mean that the environment build was not successful?

cd third_party/softras; python setup.py install; cd -;

running install
running bdist_egg
running egg_info
writing soft_renderer.egg-info/PKG-INFO
writing dependency_links to soft_renderer.egg-info/dependency_links.txt
writing requirements to soft_renderer.egg-info/requires.txt
writing top-level names to soft_renderer.egg-info/top_level.txt
reading manifest file 'soft_renderer.egg-info/SOURCES.txt'
writing manifest file 'soft_renderer.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
building 'soft_renderer.cuda.load_textures' extension
Emitting ninja build file /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
g++ -pthread -shared -B /home/kana/anaconda3/envs/lasr2/compiler_compat -L/home/kana/anaconda3/envs/lasr2/lib -Wl,-rpath=/home/kana/anaconda3/envs/lasr2/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/load_textures_cuda.o /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/load_textures_cuda_kernel.o -L/home/kana/anaconda3/envs/lasr2/lib/python3.8/site-packages/torch/lib -L/home/kana/anaconda3/envs/lasr2/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.8/soft_renderer/cuda/load_textures.cpython-38-x86_64-linux-gnu.so
building 'soft_renderer.cuda.create_texture_image' extension
Emitting ninja build file /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
g++ -pthread -shared -B /home/kana/anaconda3/envs/lasr2/compiler_compat -L/home/kana/anaconda3/envs/lasr2/lib -Wl,-rpath=/home/kana/anaconda3/envs/lasr2/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/create_texture_image_cuda.o /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/create_texture_image_cuda_kernel.o -L/home/kana/anaconda3/envs/lasr2/lib/python3.8/site-packages/torch/lib -L/home/kana/anaconda3/envs/lasr2/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.8/soft_renderer/cuda/create_texture_image.cpython-38-x86_64-linux-gnu.so
building 'soft_renderer.cuda.soft_rasterize' extension
Emitting ninja build file /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
g++ -pthread -shared -B /home/kana/anaconda3/envs/lasr2/compiler_compat -L/home/kana/anaconda3/envs/lasr2/lib -Wl,-rpath=/home/kana/anaconda3/envs/lasr2/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/soft_rasterize_cuda.o /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/soft_rasterize_cuda_kernel.o -L/home/kana/anaconda3/envs/lasr2/lib/python3.8/site-packages/torch/lib -L/home/kana/anaconda3/envs/lasr2/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.8/soft_renderer/cuda/soft_rasterize.cpython-38-x86_64-linux-gnu.so
building 'soft_renderer.cuda.voxelization' extension
Emitting ninja build file /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
g++ -pthread -shared -B /home/kana/anaconda3/envs/lasr2/compiler_compat -L/home/kana/anaconda3/envs/lasr2/lib -Wl,-rpath=/home/kana/anaconda3/envs/lasr2/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/voxelization_cuda.o /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/voxelization_cuda_kernel.o -L/home/kana/anaconda3/envs/lasr2/lib/python3.8/site-packages/torch/lib -L/home/kana/anaconda3/envs/lasr2/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.8/soft_renderer/cuda/voxelization.cpython-38-x86_64-linux-gnu.so

LASR with known camera intrinsics/extrinsics

Hello, I would like to run LASR with known camera intrinsics & extrinsics. I believe this is already implemented, but I'm having some trouble understanding how to accomplish this myself. The mechanisms seem to be two-fold: with the use_gtpose option and providing per-frame camera files (parsing code here). Could you clarify the functionality of these mechanisms? I was unable to find an example that made use of either, but if I missed one or you have one, that would also be helpful.

Another thing that confuses me is the scaling of the scale (lol) when use_gtpose is set, even though the focal length is assigned the same way whether or not the camera files are provided. This makes me think these two mechanisms might have different purposes and that I am incorrectly conflating them.

Any clarification you can provide would be much appreciated! Thanks!

No module named 'detectron2.config'

I built my environment with docker.
Therefore, I use the following command to get segmentations.

docker run -v $(pwd):/lasr --gpus all lasr bash -c 'cd lasr/detectron2; source activate lasr; python mask.py pika . /detectron2; cd -'

Then I get the following error.

Traceback (most recent call last):
File "mask.py", line 23, in
from detectron2.config import get_cfg
ModuleNotFoundError: No module named 'detectron2.config'

detectron2 is installed, and its cloned folder is in the parent directory of preprocess.
Is it because I am using docker that I am getting this error?

Clarification for Coarse-to-fine train step

Hi Gengshan,

I was looking at the paper, and it mentions that step S0 does not have any bones and that we start with a sphere.

[screenshot of the relevant part of the paper]

When I looked at the template.sh file

[screenshot of the relevant lines in template.sh]

Here it looks like the number of bones is initialized to 21 (B=20). I am slightly confused about whether we can call this step S0, because at the start it should be set to 1 (B=0). (A general trend I observed was that n_faces did not align with n_bones.)

I am not sure if I am missing something here. I would really appreciate if you could help with this.
Thank you!

error [python scripts/render_syn.py]

I ran "python scripts/render_syn.py".

The following error has occurred. How can I solve this problem?

/home/kana/anaconda3/envs/lasr/lib/python3.8/site-packages/kornia/geometry/conversions.py:369: UserWarning: XYZW quaternion coefficient order is deprecated and will be removed after > 0.6. Please use QuaternionCoeffOrder.WXYZ instead.
warnings.warn("XYZW quaternion coefficient order is deprecated and"
/home/kana/anaconda3/envs/lasr/lib/python3.8/site-packages/kornia/geometry/conversions.py:506: UserWarning: XYZW quaternion coefficient order is deprecated and will be removed after > 0.6. Please use QuaternionCoeffOrder.WXYZ instead.
warnings.warn("XYZW quaternion coefficient order is deprecated and"

Flipped flow maps in the flow loss

Hi Gengshan,

Thanks for the great work.

I noticed that you flip the flows before saving them in auto_gen.py.

lasr/preprocess/auto_gen.py

Lines 173 to 178 in 29d8759

write_pfm('%s/FlowFW/flo-%05d.pfm'% (seqname,ix ),flowfw[::-1].astype(np.float32))
write_pfm('%s/FlowFW/occ-%05d.pfm'% (seqname,ix ),occfw[::-1].astype(np.float32))
write_pfm('%s/FlowBW/flo-%05d.pfm'% (seqname,ix+1),flowbw[::-1].astype(np.float32))
write_pfm('%s/FlowBW/occ-%05d.pfm'% (seqname,ix+1),occbw[::-1].astype(np.float32))
cv2.imwrite('%s/JPEGImages/%05d.jpg'% (seqname,ix), imgL_o[:,:,::-1])
cv2.imwrite('%s/JPEGImages/%05d.jpg'% (seqname,ix+1), imgR_o[:,:,::-1])

Is there a good reason to do this?
An unintended consequence is that it leads to flipped flows being loaded at training time and the flow loss ends up being wrong.

Here's an example of the flow error being logged while running LASR on the camel example. As you can see, the ground-truth flow here is flipped:
[screenshot of the logged flow visualization]

Does the env yaml file not work on Windows?

I cloned your repo and tried to create the env with
conda env create -f lasr.yml
and then got the following messages:
Solving environment: failed

ResolvePackageNotFound:
  - lz4-c==1.9.3=h2531618_0
  - cudatoolkit==11.0.221=h6bb024c_0
  - ca-certificates==2021.5.30=ha878542_0
  - openssl==1.1.1k=h27cfd23_0
  - libwebp-base==1.2.0=h27cfd23_0
  - tk==8.6.10=hbc83047_0
  - numpy==1.20.2=py38h2d18471_0
  - sqlite==3.35.4=hdfb4753_0
  - numpy-base==1.20.2=py38hfae3a4d_0
  - jpeg==9b=h024ee3a_2
  - freetype==2.10.4=h5ab3b9f_0
  - intel-openmp==2021.2.0=h06a4308_610
  - mkl_random==1.2.1=py38ha9443f7_2
  - pytorch3d==0.4.0=py38_cu110_pyt171
  - pyyaml==5.3.1=py38h8df0ef7_1
  - zstd==1.4.9=haebb681_0
  - ld_impl_linux-64==2.33.1=h53a641e_7
  - pytorch==1.7.1=py3.8_cuda11.0.221_cudnn8.0.5_0
  - readline==8.1=h27cfd23_0
  - xz==5.2.5=h7b6447c_0
  - libtiff==4.2.0=h85742a9_0
  - lcms2==2.12=h3be6417_0
  - cudatoolkit-dev=11.0.3
  - ncurses==6.2=he6710b0_1
  - zlib==1.2.11=h7b6447c_3
  - mkl_fft==1.3.0=py38h42c9631_2
  - mkl-service==2.3.0=py38h27cfd23_1
  - libgcc-ng=9.1.0
  - libffi==3.3=he6710b0_2
  - libstdcxx-ng==9.1.0=hdf63c60_0

When I delete the build strings, the error disappears, but I'm not sure this is the right way.

Is this repo not compatible with Windows?
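
For reference, a hedged sketch of the workaround described above (stripping the trailing build strings from each pinned dependency) is a sed one-liner; even so, some pinned packages are Linux-only builds, so the environment may still not resolve on Windows:

sed -E 's/^([[:space:]]*- [^=]+=[^=]+)=[^=]+$/\1/' lasr.yml > lasr-nobuild.yml
conda env create -f lasr-nobuild.yml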

Question about the flatten loss

Dear Authors,

Thank you so much for the great work.
While reading your source code, I found there is a flatten loss here. This loss is not discussed in the paper and it is also not well explained in the code. Can you explain what this loss is about? Thank you very much!

Best,
Xianghui
