
FluidNet


This repo contains all the code required to replicate the paper:

Accelerating Eulerian Fluid Simulation With Convolutional Networks, Jonathan Tompson, Kristofer Schlachter, Pablo Sprechmann, Ken Perlin.

The workflow is:

  1. Generate the training data (simulation takes a few days for the 3D dataset).
    • Download the models and voxelize them.
    • Run mantaflow to generate fluid data.
  2. Train the network in Torch7 (training takes about 1-2 days).
    • Train and validate the model.
    • (optional) Run 3D example script to create videos from the paper.
    • (optional) Run 2D real-time demo.

The limitations of the current system are outlined at the end of this doc. Please read them before considering integrating our code.

Note: This is not an official Google product.

UPDATES / NEWS:

Oct 14 2019

  • Added a pre-trained 2D model to the repo (for use with the realtime demo). Use this if you just want to visualize some output running in realtime (skipping data creation, model training, etc). See realtime demo instructions below.

Oct 13 2019

  • Verified the instructions are still up to date. torch7 does not work with CUDA SDKs newer than 9.2, so 9.2 is what you must use; if you're on CUDA 10.0 or 10.1, you need to downgrade. For CUDNN, make sure you install v7.6.4 for CUDA 9.2. Additionally, CUDA 9.2 does not support gcc>7, and there seems to be an issue with half-float support at cutorch head. I had to install torch using:
    ./clean.sh
    export TORCH_NVCC_FLAGS="-D__CUDA_NO_HALF_OPERATORS__"
    export CC=/usr/bin/gcc-7
    export CXX=/usr/bin/g++-7
    ./install.sh
    
    See here for more CUDA 9.2 debugging help. In addition, to get cudnn working with v7.6.4 I used:
    git clone https://github.com/soumith/cudnn.torch.git -b R7 && cd cudnn.torch && luarocks make cudnn-scm-1.rockspec
    
    As discussed here.
  • Fixed a few compile issues with tfluids: a missing vector include and changes to THCudaTensor_norm, as pointed out in issue 10.

Feb 6 2017

  • Huge refactoring and bug-fix update (too many to mention here!).
  • Switched data everywhere to MAC-Grid (instead of central sampling).
  • Numerous improvements to Convnet model (some departure from arxiv paper, paper revision coming in February/March).
  • tfluids now has a third_party sub-library. This is essentially a port of some of Manta's simulator code (to torch + CUDA). Note that it is released under the GNU GPL V3 license (as per Manta's licensing).
  • GPU PCG (using NVidia's cusparse library) and Jacobi methods added as baseline methods.
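For reference, the Jacobi baseline amounts to repeatedly replacing each pressure value with the average of its neighbours minus the divergence term. A minimal NumPy sketch with unit grid spacing, a zero-pressure boundary, and no obstacles (the repo's version runs on the GPU and handles geometry; this is an illustration only):

```python
import numpy as np

def jacobi_pressure_solve(div, iters=100):
    """Jacobi iterations for the pressure Poisson equation on a 2D grid.

    Solves lap(p) = div with unit grid spacing and a zero-pressure
    (Dirichlet) boundary; obstacles are ignored in this sketch.
    """
    p = np.zeros_like(div)
    for _ in range(iters):
        p_pad = np.pad(p, 1)  # zero-pressure boundary cells
        # New value = average of the four neighbours minus the divergence term.
        p = 0.25 * (p_pad[:-2, 1:-1] + p_pad[2:, 1:-1] +
                    p_pad[1:-1, :-2] + p_pad[1:-1, 2:] - div)
    return p
```

After enough iterations the discrete Laplacian of the result matches the input divergence, which is the property the projection step relies on.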

Jan 6 2017

  • Lots of updates to model and training code.
  • Added model of Yang et al. "Data-driven projection method in fluid simulation" as a baseline comparison.
  • Changed model defaults (no more pooling, smaller network, no more p loss term).
  • Improved programmability of the model from the command line.
  • Added additional scripts to plot debug data and performance vs. epoch.

Dec 21 2016

  • Refactor of data processing code.
  • Batch creation is now asynchronous and parallel (to hide file IO latency). Results in slight speed up in training for most systems, and significant speedup for disk IO limited systems (i.e. when files are on a DFS).
  • Data cache paths are now relative, so that cache data can be moved around.
  • Implemented advection in CUDA; entire simulation.lua loop is now on the GPU. Significant speedup for 3D models (both training and eval) and slight speedup for 2D models.
  • Numerous bug fixes and cleanup.

#0. Clone this repo:

git clone git@github.com:google/FluidNet.git

#1. Generating the data

CREATING VOXELIZED MODELS

We use a subset of the NTU 3D Model Database models (http://3d.csie.ntu.edu.tw/~dynamic/database/). Please download the model files:

cd FluidNet/voxelizer
mkdir objs
cd objs
wget http://3d.csie.ntu.edu.tw/~dynamic/database/NTU3D.v1_0-999.zip
# wget https://cs.nyu.edu/~schlacht/NTU3D.v1_0-999.zip  # Alternate download location.
unzip NTU3D.v1_0-999.zip
wget https://www.dropbox.com/sh/5f3t9abmzu8fbfx/AAAkzW9JkkDshyzuFV0fAIL3a/bunny.capped.obj

Next we use the binvox library (http://www.patrickmin.com/binvox/) to create voxelized representations of the NTU models. Download the executable for your platform and put the binvox executable file in FluidNet/voxelizer. Then run our script:

cd FluidNet/voxelizer
chmod u+x binvox
python generate_binvox_files.py

Note: some users have reported that they need to install lib3ds-1-3:

sudo apt-get install lib3ds-1-3

OPTIONAL: You can view the output by using the viewvox utility (http://www.patrickmin.com/viewvox/). Put the viewvox executable in the FluidNet/voxelizer/voxels directory, then:

cd FluidNet/voxelizer/voxels
chmod u+x viewvox
./viewvox -ki bunny.capped_32.binvox

BUILDING MANTAFLOW

The first step is to download the custom manta fork.

cd FluidNet/
git clone git@github.com:kristofe/manta.git

Next, you must build mantaflow using the cmake system.

cd FluidNet/manta
mkdir build
cd build
sudo apt-get install doxygen libglu1-mesa-dev mesa-common-dev qtdeclarative5-dev qml-module-qtquick-controls
cmake .. -DGUI='OFF' 
make -j8

For the above cmake command, setting -DGUI='ON' will slow down simulation but lets you view the flow fields. You will now have a binary called manta in the build directory.

GENERATING TRAINING DATA

Install matlabnoise (https://github.com/jonathantompson/matlabnoise) to the SAME path that FluidNet is in. i.e. the directory structure should be:

/path/to/FluidNet/
/path/to/matlabnoise/

To install matlabnoise (with python bindings):

sudo apt-get install python3.5-dev
sudo apt-get install swig
git clone git@github.com:jonathantompson/matlabnoise.git
cd matlabnoise
sh compile_python3.5_unix.sh
sudo apt-get install python3-matplotlib
python3.5 test_python.py

Now you're ready to generate the training data. Make sure the directory data/datasets/output_current exists. For the 3D training data run:

cd FluidNet/manta/build
./manta ../scenes/_trainingData.py --dim 3 --addModelGeometry True --addSphereGeometry True

For the 2D data run:

cd FluidNet/manta/build
./manta ../scenes/_trainingData.py --dim 2 --addModelGeometry True --addSphereGeometry True

#2. Training the model

RUNNING TORCH7 TRAINING

We assume that Torch7 is installed; otherwise follow the instructions here. We use the CUDA 9.2 SDK and the standard distribution (pulled on Oct 13th 2019). As of 10/2019, LUA52 is broken, so use the LUAJIT version (this is the default anyway). Lastly, there are some install notes worth reading in the Oct 2019 update above.

After installing torch, compile tfluids, our custom CUDA & C++ library that implements a large number of the modules used in the paper:

sudo apt-get install freeglut3-dev
sudo apt-get install libxmu-dev libxi-dev
cd FluidNet/torch/tfluids
luarocks make tfluids-1-00.rockspec

Note #1: some users are reporting that you need to explicitly install findCUDA for tfluids to compile properly with CUDA 7.5 and above.

luarocks install findCUDA

Note #2: If THC.h cannot be found during compilation, make sure cutorch is installed properly (check that torch7 install.sh could find the CUDA SDK).

All training related code is in torch/ directory. To train a model on 3D data:

cd FluidNet/torch
qlua fluid_net_train.lua -gpu 1 -dataset output_current_3d_model_sphere -modelFilename myModel3D

This will pull data from the directory output_current_3d_model_sphere and dump the model to myModel3D. To train a 2D model:

cd FluidNet/torch
qlua fluid_net_train.lua -gpu 1 -dataset output_current_model_sphere -modelFilename myModel2D

At any point during the training sim you can plot test and training set loss values using the Matlab script FluidNet/torch/utils/PlotEpochs.m.

You can control any model or training config parameters from the command line. If you need to define nested variables the syntax is:

qlua fluid_net_train.lua -new_model.num_banks 2

i.e. nested variables are dot-separated. You can print a list of possible config variables using:

qlua fluid_net_train.lua --help

Note: the first time the data is loaded from the manta output, it is cached to the torch/data/ directory. So if you need to reload new data (because you altered the dataset) then delete the cache files (torch/data/*.bin).

RUNNING THE REAL-TIME DEMO

For 2D models only! To run the interactive demo, first compile LuaGL:

git clone git@github.com:kristofe/LuaGL.git
cd LuaGL
luarocks make luagl-1-02.rockspec

Then run the simulator:

cd FluidNet/torch
qlua -lenv fluid_net_2d_demo.lua -gpu 1 -dataset output_current_model_sphere -modelFilename myModel2D

The command line output will print a list of possible key and mouse strokes.

RUNNING THE 3D SIMULATIONS

To render the videos you will need to install Blender, but to just create the volumetric data no further tools are needed. First run our 3D example script (after training a 3D model):

cd FluidNet/torch
qlua fluid_net_3d_sim.lua -gpu 1 -loadVoxelModel none -modelFilename myModel3D

To control which scene is loaded, use loadVoxelModel="none"|"arc"|"bunny". This will dump a large amount of volumetric data to the file FluidNet/blender/<mushroom_cloud|bunny|arch>_render/output_density.vbox.

Now that the fluid simulation has run, you can render the frames in Blender. Note that rendering takes a few hours, while the 3D simulation itself is fast (with much of the time spent dumping the results to disk). An implementation of a real-time 3D fluid renderer is outside the scope of this work. In addition, self-advection of the velocity field is currently carried out on the CPU, and so it is the slowest part of our simulator (a CUDA implementation is future work).
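The self-advection mentioned above is standard semi-Lagrangian advection: trace each sample point backwards through the velocity field and interpolate. A minimal 2D NumPy sketch on a cell-centred grid (an illustration only; the actual code works on a MAC grid and handles obstacles):

```python
import numpy as np

def advect_semi_lagrangian(q, u, v, dt):
    """Semi-Lagrangian advection of a cell-centred field q by velocity (u, v).

    All arrays share the shape (h, w); boundaries are handled by clamping.
    """
    h, w = q.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Trace each cell centre backwards along the velocity field.
    xb = np.clip(xs - dt * u, 0, w - 1)
    yb = np.clip(ys - dt * v, 0, h - 1)
    # Bilinear interpolation of q at the back-traced positions.
    x0 = np.floor(xb).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    y0 = np.floor(yb).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    fx, fy = xb - x0, yb - y0
    top = q[y0, x0] * (1 - fx) + q[y0, x1] * fx
    bot = q[y1, x0] * (1 - fx) + q[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```

With zero velocity this is the identity map, which makes the routine easy to sanity-check.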

For the mushroom cloud render, open FluidNet/blender/MushroomRender.blend. Next we need to re-attach the data file (because blender caches full file paths which will now be wrong). Click on the "Smoke" object in the "Outliner" window (default top right). Click on the "Texture" icon in the "Properties" window (default bottom right), it's the one that looks like a textured Square. Scroll down to "Voxel Data" -> "Source Path:" and click the file icon. Point the file path to /path/to/FluidNet/blender/mushroom_cloud_render/density_output.vbox. Next, click either the file menu "Render" -> "Render Image", or "Render Animation". By default the render output goes to /tmp/. You can also scrub through the frame index on the time-line window (default bottom) to click a frame you want then render just that frame.

The above instructions also apply to the bunny and arch examples. Note: you might need to re-center the model depending on your version of binvox (older versions of binvox placed the voxelized model in a different location). If this is the case, click on the "GEOM" object in the "Outliner" window. Click the eye and camera icons (so they are no longer greyed out). Then press "Shift-Z" to turn on the geometry render preview. Now that you can see the geometry and model, you can manually align the two so they overlap.

#3. Limitations of the current system


While this codebase is relatively self-contained and full-featured, it is not a "ready-to-ship" fluid simulator. Rather, it is a proof of concept and research platform only. If you are interested in integrating our network into an existing system, feel free to reach out ([email protected]) and we will do our best to answer your questions.

RUNTIME

The entire simulation loop is not optimized; however, it is fast enough for real-time applications when good GPU resources are available (e.g. an NVidia GTX 1080 or Titan).

BOUNDARY HANDLING

Our example boundary condition code is very rudimentary. However, we support the same cell types as Manta (in-flow, empty, occupied, etc.), so more complicated boundary conditions can be created. One potential limitation is that the setWallBcs codepath assumes zero-velocity occupiers (as Manta does). However, it would be an easy extension to allow internal occupied voxels to have non-zero velocity.
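For intuition, the zero-velocity wall condition amounts to zeroing every MAC-grid face that touches an occupied cell. A simplified 2D NumPy sketch (the real setWallBcs also handles domain borders and the other cell types; the function name and layout here are illustrative):

```python
import numpy as np

def set_wall_bcs(u, v, occupied):
    """Zero the velocity on all faces adjoining occupied (solid) cells.

    On a 2D MAC grid, u holds x-face velocities with shape (H, W+1) and
    v holds y-face velocities with shape (H+1, W); `occupied` is an
    (H, W) boolean mask of solid cells.
    """
    u = u.copy(); v = v.copy()
    # A face is solid if the cell on either side of it is occupied.
    u[:, :-1][occupied] = 0.0   # face on the cell's left
    u[:, 1:][occupied] = 0.0    # face on the cell's right
    v[:-1, :][occupied] = 0.0   # face below the cell
    v[1:, :][occupied] = 0.0    # face above the cell
    return u, v
```

Allowing moving obstacles would simply mean writing the occupier's velocity into those faces instead of zero, which is the "easy extension" mentioned above.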

RENDERING

We do not have a real-time 3D fluid renderer; we use an offline renderer instead. For our 2D "renderer", we simply display the RGB density field on screen and visualize the velocity vectors. It is very rudimentary. Incorporating an open-source 3D fluid renderer is future work.

SIMULATOR

The only external forces that are supported are vorticity confinement and buoyancy. Viscosity and gravity are not supported (but could be added easily).
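For intuition, a Boussinesq-style buoyancy force is just a density-proportional update to the vertical velocity. A minimal sketch (the parameter names buoyancy and ambient are illustrative, not from the FluidNet code, and the real simulator operates on MAC-grid face velocities):

```python
import numpy as np

def add_buoyancy(v, density, dt, buoyancy=1.0, ambient=0.0):
    """Add a simple buoyancy force to the vertical velocity component.

    Cells whose density differs from `ambient` receive a velocity kick
    proportional to that difference; the sign convention is arbitrary here.
    """
    return v + dt * buoyancy * (density - ambient)
```

Gravity would be an even simpler constant-per-cell update of the same form, which is why the text notes it could be added easily.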

UNIT TESTING

We have unit tests (including FD gradient checks) for all custom torch modules.

The two main test scripts are:

cd FluidNet/
qlua lib/modules/test_ALL_MODULES.lua

and (this one requires us to generate data from manta first):

cd FluidNet/manta/build
./manta ../scenes/_testData.py
cd ../../torch/tfluids
qlua -ltfluids -e "tfluids.test()"

You should run these first if you ever get into trouble training or running the model.

fluidnet's People

Contributors

jason-cooke, jonathantompson, mrry


fluidnet's Issues

Failed installing dependency:

the following command ends as per the title line;

on this cloud I have neither write permissions on /glob nor sudo rights. I am trying a workaround using --local but it still fails; any ideas?

$ luarocks make tfluids-1-00.rockspec --local
...
make: *** [install] Error 1

Error: Failed installing dependency: https://raw.githubusercontent.com/torch/rocks/master/torch-scm-1.rockspec - Build error: Failed installing.
[u23885@c009 tfluids]$ luarocks make tfluids-1-00.rockspec

Error: Your user does not have write permissions in /glob/deep-learning/versions/torch/install/lib/luarocks/rocks
-- you may want to run as a privileged user or use your local tree with --local.

too few arguments in function call

Sorry to bother you. When I run this command

luarocks make tfluids-1-00.rockspec

I only get the error FluidNet/torch/tfluids/generic/tfluids.cu(1882): error: too few arguments in function call.

So I found this line in the file:

THCudaTensor_norm(state, tensor_pdelta_norm, tensor_pdelta, 2, 1);

Has there been a change to the function THCudaTensor_norm? I'm also having trouble finding the definition of THCudaTensor_norm.

what torch should be installed and how

From the documentation:

We assume that Torch7 is installed, otherwise follow the instructions here. We use the standard distro with the cuda SDK for cutorch and cunn and cudnn.

So the above instructions take me to http://torch.ch/ which I used to install torch as per http://torch.ch/docs/getting-started.html#_

But presumably the standard distro (= https://github.com/torch/distro ) and cudnn (= https://github.com/soumith/cudnn.torch ) are additional or different components.

Not knowing anything about torch, I am fully in the dark as to which components should be installed. The documentation line I pasted above in italics is quite confusing.

CUDA 11 and Ubuntu 20.04.5

Hi,

Is it possible to run the source code with Ubuntu 20.04.5 and CUDA 11? I cannot install torch7 successfully with my settings.

CUDA compute capability or CUDA version requirement?

When running qlua fluid_net_train.lua -gpu 1 -dataset output_current_model_sphere -modelFilename myModel I get:

Try 'sleep --help' for more information.
sleep: invalid time interval ‘0,001’
Try 'sleep --help' for more information.
sleep: invalid time interval ‘0,001’========================>.]  319/320 
Try 'sleep --help' for more information.
sleep: invalid time interval ‘0,001’
Try 'sleep --help' for more information.
 [===========================================================>]  320/320 
sleep: invalid time interval ‘0,001’
Try 'sleep --help' for more information.
sleep: invalid time interval ‘0,001’
Try 'sleep --help' for more information.
==> Loaded 20480 samples
==> Creating model...
Number of input channels: 3
Model type: default
Bank 1:
Adding convolution: cudnn.SpatialConvolution(3 -> 16, 3x3, 1,1, 1,1)
Adding non-linearity: nn.ReLU (inplace true)
Bank 1:
Adding convolution: cudnn.SpatialConvolution(16 -> 16, 3x3, 1,1, 1,1)
Adding non-linearity: nn.ReLU (inplace true)
Bank 1:
Adding convolution: cudnn.SpatialConvolution(16 -> 16, 3x3, 1,1, 1,1)
Adding non-linearity: nn.ReLU (inplace true)
Bank 1:
Adding convolution: cudnn.SpatialConvolution(16 -> 16, 3x3, 1,1, 1,1)
Adding non-linearity: nn.ReLU (inplace true)
Adding convolution: cudnn.SpatialConvolution(16 -> 1, 1x1)
==> defining loss function
    using criterion nn.FluidCriterion: pLambda=0,00, uLambda=0,00, divLambda=1,00, borderWeight=1,0, borderWidth=3
==> Extracting model parameters
==> Defining Optimizer
    Using ADAM...
==> Profiling FPROP for 10 seconds with grid res 128
THCudaCheck FAIL file=/home/torstein/progs/FluidNet/torch/tfluids/generic/tfluids.cu line=119 error=8 : invalid device function
qlua: /home/torstein/torch/install/share/lua/5.1/tfluids/init.lua:516: cuda runtime error (8) : invalid device function at /home/torstein/progs/FluidNet/torch/tfluids/generic/tfluids.cu:119
stack traceback:
	[C]: at 0x7fdd9f648f50
	[C]: in function 'emptyDomain'
	/home/torstein/torch/install/share/lua/5.1/tfluids/init.lua:516: in function 'emptyDomain'
	fluid_net_train.lua:145: in main chunk

Using Nvidia GTX 770 with 367.57 drivers and 7.5.17 CUDA. Here's an overview over CUDA functions and required compute capability. The GPU in question has compute capability 3.0.

Here's the output from running './test.sh' in torch:
torch test.txt

Error when run " luarocks make tfluids-1-00.rockspec"

Dear Sir/Madam,
I met an error when I ran "luarocks make tfluids-1-00.rockspec". I don't know what the problem is; the output didn't show the reason. I need your help. Thanks.
Driver Version: 375.88, CUDA 7.5/8, gcc 4.8, ubuntu 14.04, cmake 3.2, Tesla K40
-- Found Torch7 in /home/roseyu/su/distro/install
-- Compiling with OpenMP support
Compiling for CUDA architecture 3.5
Compiling with OpenGL support
Compiling with CUDA support.
-- Configuring done
-- Generating done
-- Build files have been written to: /home/roseyu/FliudNet/FluidNet/torch/tfluids/build
[ 33%] Building NVCC (Device) object CMakeFiles/tfluids.dir/generic/tfluids_generated_tfluids.cu.o
/home/roseyu/FliudNet/FluidNet/torch/tfluids/generic/tfluids.cu(1882): error: too few arguments in function call

1 error detected in the compilation of "/tmp/tmpxft_00008763_00000000-7_tfluids.cpp1.ii".
CMake Error at tfluids_generated_tfluids.cu.o.cmake:266 (message):
Error generating file
/home/roseyu/FliudNet/FluidNet/torch/tfluids/build/CMakeFiles/tfluids.dir/generic/./tfluids_generated_tfluids.cu.o

make[2]: *** [CMakeFiles/tfluids.dir/generic/tfluids_generated_tfluids.cu.o] Error 1
make[1]: *** [CMakeFiles/tfluids.dir/all] Error 2
make: *** [all] Error 2

Error: Build error: Failed building.

criterion error is NaN

When I tried to run 'fluid_net_train' I get:

qlua: lib/run_epoch.lua:221: criterion error is NaN or > 1e3.
stack traceback:
	[C]: at 0x7f551865f960
	[C]: in function 'error'
	lib/run_epoch.lua:221: in function 'opfunc'
	.../distro/install/share/lua/5.1/optim/adam.lua:38: in function 'optimMethod'
	lib/run_epoch.lua:320: in function 'runEpoch'
	fluid_net_train.lua:216: in main chunk

It looks like a problem in the training data, but how can I check the .bin files?
Any idea what it could be?

Using Nvidia GTX 970 with 375.26 drivers and CUDA 8.0

NaN output when running manta

when executing the following:

./manta ../scenes/_trainingData.py --dim 3 --addModelGeometry True --addSphereGeometry True

the output has entries such as:

FluidSolver::solvePressure skipping CorrectVelocity since res is nan!

According to #8, a possible workaround is to turn down the gradient clipping magnitude first and see if that works. By default it's 1, but I would try as low as 0.2.

Any idea how to do that? Or any suggestions on how to resolve this NaN problem?
