pytorch-blender's Introduction

blendtorch

blendtorch is a Python framework to seamlessly integrate Blender into PyTorch for deep learning from artificial visual data. We utilize Eevee, Blender's physically based real-time renderer, to synthesize images and annotations in real time and thus avoid stalling model training in many cases.

If you find the project helpful, please consider citing it.

Feature summary

  • Data Generation: Stream distributed Blender renderings directly into PyTorch data pipelines in real time for supervised learning and domain randomization applications. Supports arbitrary pickle-able objects to be sent alongside images/videos. Built-in recording capability to replay data without Blender. Bi-directional communication channels allow Blender simulations to adapt during network training.
    More info [examples/datagen], [examples/compositor_normals_depth], [examples/densityopt]
  • OpenAI Gym Support: Create and run remotely controlled Blender gyms to train reinforcement agents. Blender serves as simulation, visualization, and interactive live manipulation environment.
    More info [examples/control]
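
The data-generation feature above boils down to Blender workers publishing pickled dicts (a rendered image plus arbitrary metadata) that the PyTorch side deserializes before batch collation. A minimal sketch of that message round-trip; the field names are illustrative, not the library's actual schema:

```python
import pickle

# A stand-in for what a Blender publisher might send per frame.
# Field names here are illustrative, not blendtorch's actual schema.
message = {
    "image": [[0.1, 0.2], [0.3, 0.4]],   # placeholder for rendered pixel data
    "frameid": 42,
    "bboxes": [(10, 20, 30, 40)],        # arbitrary pickle-able annotations
}

wire = pickle.dumps(message)      # serialized on the Blender side
received = pickle.loads(wire)     # deserialized on the PyTorch side

assert received == message
```

In blendtorch itself the transport and collation are handled for you; this only illustrates why any pickle-able object can ride alongside the images.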

The figure below visualizes the basic concept of blendtorch used in the context of generating artificial training data for a real-world detection task.


Fig 1: With Blendtorch, you are able to train your PyTorch modules on massively randomized artificial data generated by Blender simulations.

Getting started

  1. Read the installation instructions below
  2. To get started with blendtorch for generating training data, read [examples/datagen].
  3. To learn about using blendtorch for creating reinforcement training environments read [examples/control].

Prerequisites

This package has been tested with

  • Blender >= 2.83/2.91/3.0/3.1 (Python >= 3.7)
  • PyTorch >= 1.5/1.10 (Python >= 3.7)

running Windows 10 and Linux. Other versions might work as well, but have not been tested.

Installation

blendtorch is composed of two distinct sub-packages, blendtorch.btb and blendtorch.btt, providing the Blender and PyTorch views on blendtorch, respectively. blendtorch.btt will be installed to your local Python environment, while blendtorch.btb will be installed to the Python environment that ships with Blender.

  1. Clone this repository

    git clone https://github.com/cheind/pytorch-blender.git <DST>
    
  2. Extend PATH

    Ensure the Blender executable is in your environment's lookup PATH. On Windows this can be accomplished by

    set PATH=c:\Program Files\Blender Foundation\Blender 2.91;%PATH%
    

    On Ubuntu, when Blender is installed via snap, the path can be included by adding the following line to your ~/.bashrc:

    export PATH=/snap/blender/current/${PATH:+:${PATH}}
    
  3. Complete Blender settings

    Open Blender at least once and complete the initial settings. If this step is skipped, some of the tests (especially the tests related to reinforcement learning) will fail (Blender 2.91).

  4. Install blendtorch.btb

    Run

    blender --background --python <DST>/scripts/install_btb.py
    

    to install blendtorch-btb into the Python environment bundled with Blender.

  5. Install blendtorch.btt

    Run

    pip install -e <DST>/pkg_pytorch
    

    which installs blendtorch-btt into the Python environment from which you intend to run PyTorch.

  6. Install gym [optional]

    While not required, it is advised to install OpenAI gym if you intend to use blendtorch for reinforcement learning:

    pip install gym
    
  7. Install dev requirements [optional]

    If you plan to run the unit tests, install the development requirements and invoke pytest:

    pip install -r requirements_dev.txt
    pytest tests/
    

Troubleshooting

Run

blender --version

and check that the correct Blender version (>=2.83) is printed to the console. Next, ensure that blendtorch-btb is installed correctly:

blender --background --python-use-system-env --python-expr "import blendtorch.btb as btb; print(btb.__version__)"

which should print the blendtorch version number on success. Next, ensure that blendtorch-btt is installed correctly:

python -c "import blendtorch.btt as btt; print(btt.__version__)"

which should print the blendtorch version number on success.
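
Before the checks above, it can also help to verify programmatically that the launcher will find Blender at all; a small sketch using only the Python standard library:

```python
import shutil

# shutil.which performs the same PATH lookup the shell would.
blender = shutil.which("blender")
if blender is None:
    print("Blender is not on PATH; revisit step 2 of the installation.")
else:
    print(f"Found Blender executable at {blender}")
```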

Architecture

Please see [examples/datagen] and [examples/control] for an in-depth architectural discussion. Bi-directional communication is explained in [examples/densityopt].

Runtimes

The following table shows the mean runtimes per batch (batch size 8) and per image for a simple Cube scene (640x480xRGBA). See benchmarks/benchmark.py for details. The timings include rendering, transfer, decoding, and batch collating. Reported timings are for Blender 2.8; Blender 2.9 performs equally well on this scene but is usually faster for more complex renderings.

Blender Instances   Runtime sec/batch   Runtime sec/image   Arguments
1                   0.236               0.030               UI refresh
2                   0.140               0.018               UI refresh
4                   0.099               0.012               UI refresh
5                   0.085               0.011               no UI refresh

Note: If no image transfer is needed, e.g. in reinforcement learning of physical simulations, 2000 Hz is easily achieved.

Cite

The code accompanies our academic work [1], [2] in the field of machine learning from artificial images. Please consider the following publications when citing blendtorch:

@inproceedings{blendtorch_icpr2020_cheind,
    author = {Christoph Heindl, Lukas Brunner, Sebastian Zambal and Josef Scharinger},
    title = {BlendTorch: A Real-Time, Adaptive Domain Randomization Library},
    booktitle = {
        1st Workshop on Industrial Machine Learning 
        at International Conference on Pattern Recognition (ICPR2020)
    },
    year = {2020},
}

@inproceedings{robotpose_etfa2019_cheind,
    author={Christoph Heindl, Sebastian Zambal, Josef Scharinger},
    title={Learning to Predict Robot Keypoints Using Artificially Generated Images},
    booktitle={
        24th IEEE International Conference on 
        Emerging Technologies and Factory Automation (ETFA)
    },    
    year={2019}
}

Caveats

  • Although offscreen rendering is supported in Blender 2.8x, it requires a UI frontend and thus cannot run in --background mode. If your application does not require offscreen renderings, you may enable background usage (see tests/ for examples).
  • The renderings produced by Blender are by default in linear color space and will thus appear darker than expected when displayed without gamma correction.
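
The second caveat can be addressed by applying the standard sRGB transfer function to the linear values before display; a minimal, Blender-free sketch:

```python
def linear_to_srgb(v: float) -> float:
    """Encode a linear-light value in [0, 1] with the sRGB transfer function."""
    if v <= 0.0031308:
        return 12.92 * v
    return 1.055 * (v ** (1.0 / 2.4)) - 0.055

# A linear mid-gray of 0.18 maps to roughly 0.46 in display space,
# which is why un-encoded linear renderings look too dark.
encoded = linear_to_srgb(0.18)
assert 0.45 < encoded < 0.47
```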

pytorch-blender's People

Contributors

cheind, gauenk

pytorch-blender's Issues

Enhancement for background support

"Despite offscreen rendering is supported in Blender 2.8x it requires a UI frontend and thus cannot run in --background mode."

Until officially supported by Blender, this could be worked around for EEVEE by configuring a fake monitor.
This is what I used for my headless Linux setup; see the answer here.

Error in example/controls/cartpole.py

The following error occurs when running the example/controls/cartpole.py:

Read blend: /home/gauenk/Documents/packages/pytorch-blender/examples/control/cartpole_gym/envs/cartpole.blend
Traceback (most recent call last):
  File "./cartpole.py", line 39, in <module>
    main()
  File "./cartpole.py", line 30, in main
    obs = env.reset()        
  File "/home/gauenk/.local/lib/python3.8/site-packages/gym/wrappers/order_enforcing.py", line 16, in reset
    return self.env.reset(**kwargs)
  File "/home/gauenk/Documents/packages/pytorch-blender/pkg_pytorch/blendtorch/btt/env.py", line 292, in reset
    obs, info = self._env.reset()
  File "/home/gauenk/Documents/packages/pytorch-blender/pkg_pytorch/blendtorch/btt/env.py", line 64, in reset
    obs = ddict.pop('obs')
KeyError: 'obs'

This happens because in btt/env.py (PyTorch's env.py) the reset function pops a key "obs" for the observation. However, an observation is not sent from the Blender side (btb/env.py, in the _pre_animation function). The dict sent from the Blender side is defined as follows:

self.ctx = {'prev_action': None, 'done': False}
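
A plausible fix for the mismatch described above is to have the Blender side include an initial observation under the 'obs' key, so that the PyTorch side's pop succeeds. A sketch; the zero observation is a placeholder, and the real environment should send its actual initial state:

```python
# What the Blender side (btb/env.py, _pre_animation) currently sends:
ctx = {"prev_action": None, "done": False}

# The PyTorch side (btt/env.py, reset) does `obs = ddict.pop('obs')`,
# so the dict needs an initial observation as well. Placeholder value:
ctx = {"obs": 0.0, "prev_action": None, "done": False}

obs = ctx.pop("obs")   # mirrors the failing line in reset()
assert obs == 0.0
assert ctx == {"prev_action": None, "done": False}
```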

Issue with install / load

Everything seemed to install ok, but when trying to test in python3 the module is missing. Any ideas?

pip3 list -l
Package           Version  Location
----------------- -------- --------------------------------------------------------
blendtorch-btb    0.4.0    /home/me/Prog/gfx/pytorch-blender/pkg_blender
blendtorch-btt    0.4.0    /home/me/Prog/gfx/pytorch-blender/pkg_pytorch
cycler            0.10.0
kiwisolver        1.3.1
matplotlib        3.4.1
minexr            1.0.1
numpy             1.20.2
Pillow            8.2.0
pip               21.1.1
PyOpenGL          3.1.5
pyparsing         2.4.7
python-dateutil   2.8.1
pyzmq             22.0.3
setuptools        49.2.1
six               1.15.0
supershape        1.1.1
torch             1.8.1
typing-extensions 3.10.0.0
me@cuddles:~/Prog/gfx/pytorch-blender$ python3
Python 3.6.9 (default, Nov  7 2019, 10:44:02) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import blendtorch.btt as btt
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'blendtorch'

collision between the cloth and rigid body not working

Hi, thanks for sharing the very interesting work!

I am following the control example for reinforcement learning. I created my own blend file, which includes a collision between cloth and a rigid body. It works fine if I open it in Blender directly, but with pytorch-blender the collision is not working.
I am wondering if you have any advice on that.

Thanks in advance

Is there any video where I can learn about the capabilities of this

I'd like to learn about it easily, maybe with an example video. I have a Blender add-on that already bundles torch, and it would be awesome to accentuate it with something like this, but I need a kind of "for dummies" approach so I can understand how to work with it.

needs adaptation to 2.8?

On the develop branch, after changing the path to call /blender2.8/blender.py in example.py and running python example.py --blend-path ~/Desktop/blender282/, I got an error:

Traceback (most recent call last):
  File "example.py", line 64, in <module>
    main()
  File "example.py", line 43, in main
    with bt.BlenderLauncher(num_instances=4, script='blender28/blender.py', scene='blender28/scene.blend', blend_path=args.blend_path) as bl:        
  File "/home/pm/Desktop/APPS/pytorch-blender/blendtorch/torch/launcher.py", line 52, in __init__
    self.blender_info = discover_blender(self.blend_path)
  File "/home/pm/Desktop/APPS/pytorch-blender/blendtorch/torch/finder.py", line 56, in discover_blender
    with tempfile.TemporaryFile(mode='w', delete=False) as fp:
TypeError: TemporaryFile() got an unexpected keyword argument 'delete'

I'm sure you're working on the 2.8x version so feel free to close it as you fix the script.

Neural Style Transfer that is UV aware?

Can we take each triangle of a mesh, bake it to an image together with its adjacent triangles, render a "hintmap" for it, run style transfer against this image, then transfer the result and bake it to a UV image in Blender?

I have had good results without it being seamless, but this seems like it could enable transferring image styles onto 3D models.
https://www.youtube.com/watch?v=bhVAXoPBheE

TypeError: bpy_struct: item.attr = val: RigidBodyConstraint.motor_lin_target_velocity expected a float type, not numpy.ndarray

I just installed the project with the latest versions of pytorch (1.10.1), gym (0.21.0), and Blender (3.0.0).

I'm getting the following error when trying to run the cartpole example:

Traceback (most recent call last):
  File "/Users/rafa/Documents/Dev/pytorch-blender/pkg_blender/blendtorch/btb/animation.py", line 209, in _on_pre_frame
    self.pre_frame.invoke()
  File "/Users/rafa/Documents/Dev/pytorch-blender/pkg_blender/blendtorch/btb/signal.py", line 54, in invoke
    s(*args, **kwargs)
  File "/Users/rafa/Documents/Dev/pytorch-blender/pkg_blender/blendtorch/btb/env.py", line 108, in _pre_frame
    self._env_prepare_step(action)
  File "/Users/rafa/Documents/Dev/pytorch-blender/examples/control/cartpole_gym/envs/cartpole.blend.py", line 33, in _env_prepare_step
    self._apply_motor_force(action)
  File "/Users/rafa/Documents/Dev/pytorch-blender/examples/control/cartpole_gym/envs/cartpole.blend.py", line 55, in _apply_motor_force
    self.motor.motor_lin_target_velocity = self.motor.motor_lin_target_velocity + \
TypeError: bpy_struct: item.attr = val: RigidBodyConstraint.motor_lin_target_velocity expected a float type, not numpy.ndarray

It seems that the value of f passed to def _apply_motor_force(self, f): is a numpy.ndarray of 3 values.

Is there an easy way to fix this, or is it a deeper issue due to changes in the dependencies?
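
A likely workaround for the TypeError above is to coerce the incoming action to a Python float before assigning it to the bpy property. A sketch; taking the first array element is an assumption, as the correct component depends on the environment's action space:

```python
import numpy as np

def to_scalar(f):
    """Coerce an action (scalar, sequence, or numpy array) to a Python float.

    Assumption: for multi-element arrays the first element is the linear
    motor target; adjust the index to match your action space.
    """
    return float(np.asarray(f, dtype=float).ravel()[0])

# bpy's RigidBodyConstraint.motor_lin_target_velocity expects a plain float:
assert isinstance(to_scalar(np.array([0.5, 0.1, -0.2])), float)
assert to_scalar(1) == 1.0
```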

Unittest errors due to short DEFAULT_TIMEOUTMS

Hi,

I was running your library on my laptop, which I have had for a couple of years, but I would get a timeout-related error.
When I changed the DEFAULT_TIMEOUTMS constant to 60000 (60 secs), the problem was fixed. Maybe adding that to the installation guide would help others with low-end systems.

Thanks for the library.

datagen other complex scenes (than cubes)

Hi, thanks for this amazing package !
I was wondering how to go about making diverse scenes other than cubes. Is there anywhere I could download these?
Specifically, say I download some .blend files off the internet, how do I make suitable .py files for their integration?
Also, how does background diversification come into play?

Thanks !
Guy

Debugging cartpole.blend.py in VScode

Hi

I have a question about a debugging issue I met with the cartpole example.

I use Docker, and my IDE is VS Code.

I try to debug the functions inside cartpole.blend.py, but breakpoints are not working.

What I've figured out so far is that cartpole.blend.py is executed as a command by subprocess.Popen(cmd, ...).

I am a beginner, so this may be a silly question:

Is there any way to debug remote code executed by a command?
Could you tell me which IDE you use for this project?

Guided DR

Provide an example that illustrates guided domain randomization, for example using stochastic computational graphs to optimize the parameters of a 3D mesh-generating function like these 3D supershapes:

import bpy
import math
 
# mesh arrays
verts = []
faces = []
edges = []
 
#3D supershape parameters
m = 1.23
a = -0.06
b = 2.78
n1 = 0.5
n2 = -.48
n3 = 1.5
 
scale = 3
 
Unum = 50
Vnum = 50
 
Uinc = math.pi / (Unum/2)
Vinc = (math.pi/2)/(Vnum/2)
 
#fill verts array
theta = -math.pi
for i in range (0, Unum + 1):
    phi = -math.pi/2
    r1 = 1/(((abs(math.cos(m*theta/4)/a))**n2+(abs(math.sin(m*theta/4)/b))**n3)**n1)
    for j in range(0,Vnum + 1):
        r2 = 1/(((abs(math.cos(m*phi/4)/a))**n2+(abs(math.sin(m*phi/4)/b))**n3)**n1)
        x = scale * (r1 * math.cos(theta) * r2 * math.cos(phi))
        y = scale * (r1 * math.sin(theta) * r2 * math.cos(phi))
        z = scale * (r2 * math.sin(phi))
 
        vert = (x,y,z) 
        verts.append(vert)
        #increment phi
        phi = phi + Vinc
    #increment theta
    theta = theta + Uinc
 
#fill faces array
count = 0
for i in range (0, (Vnum + 1) *(Unum)):
    if count < Vnum:
        A = i
        B = i+1
        C = (i+(Vnum+1))+1
        D = (i+(Vnum+1))
 
        face = (A,B,C,D)
        faces.append(face)
 
        count = count + 1
    else:
        count = 0
 
#create mesh and object
mymesh = bpy.data.meshes.new("supershape")
myobject = bpy.data.objects.new("supershape",mymesh)
 
#set mesh location
myobject.location = bpy.context.scene.cursor.location
bpy.context.collection.objects.link(myobject)
 
#create mesh from python data
mymesh.from_pydata(verts,edges,faces)
mymesh.update(calc_edges=True)
 
#set the object to edit mode
bpy.context.view_layer.objects.active = myobject
bpy.ops.object.mode_set(mode='EDIT')
 
# remove duplicate vertices
bpy.ops.mesh.remove_doubles() 
 
# recalculate normals
bpy.ops.mesh.normals_make_consistent(inside=False)
bpy.ops.object.mode_set(mode='OBJECT')
 
# subdivide modifier
myobject.modifiers.new("subd", type='SUBSURF')
myobject.modifiers['subd'].levels = 3
 
# show mesh as smooth
mypolys = mymesh.polygons
for p in mypolys:
    p.use_smooth = True

adapted from

http://wiki.theprovingground.org/blender-py-supershape
http://paulbourke.net/geometry/supershape/

How to access other render passes?

Currently only the combined render pass is available through OffscreenRenderer. Do we need to re-render, or can we directly access the intermediate passes?

Installing blendtorch.btb fails

Setup fails at blender --background --python <DST>/scripts/install_btb.py

Below is the terminal log

Blender 3.4.1 (hash 55485cb379f7 built 2022-12-20 01:51:19)
Read prefs: C:\Users\USERNAME\AppData\Roaming\Blender Foundation\Blender\3.4\config\userpref.blend
Installing Blender dependencies. This might take a while...
b'Looking in links: c:\\Users\\USERNAME\\AppData\\Local\\Temp\\tmpxxec3_tz\r\nRequirement already satisfied: setuptools in c:\\users\\USERNAME\\appdata\\roaming\\python\\python310\\site-packages (68.0.0)\r\nRequirement already satisfied: pip in c:\\users\\USERNAME\\appdata\\roaming\\python\\python310\\site-packages (23.1.2)\r\n'
b'Requirement already satisfied: pip in c:\\users\\USERNAME\\appdata\\roaming\\python\\python310\\site-packages (23.1.2)\r\n'
['C:\\Program Files\\Blender Foundation\\Blender 3.4\\3.4\\python\\bin\\python.exe', '-m', 'pip', 'install', '--upgrade', '--user', '-e', 'C:\\Users\\USERNAME\\Documents\\BlendTorch\\scripts\\..\\pkg_blender']
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [60 lines of output]
      C:\\Users\\USERNAME\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\dist.py:510: SetuptoolsDeprecationWarning: Invalid version: '"0.4.0"'.
      !!

              ********************************************************************************
              The version specified is not a valid version according to PEP 440.
              This may not work as expected with newer versions of
              setuptools, pip, and PyPI.

              By 2023-Sep-26, you need to update your project and remove deprecated calls
              or your builds will no longer be supported.

              See https://peps.python.org/pep-0440/ for details.
              ********************************************************************************

      !!
        self._validate_version(self.metadata.version)
      ['pyzmq>=18.1.1', 'numpy>=1.18.2', 'pyopengl>=3.1.5', 'minexr>=1.0.0', 'supershape>=1.1.0']
      running egg_info
      C:\\Users\\USERNAME\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\command\\egg_info.py:131: SetuptoolsDeprecationWarning: Invalid version: '"0.4.0"'.
      !!

              ********************************************************************************
              Version '"0.4.0"' is not valid according to PEP 440.

              Please make sure to specify a valid version for your package.
              Also note that future releases of setuptools may halt the build process
              if an invalid version is given.

              By 2023-Sep-26, you need to update your project and remove deprecated calls
              or your builds will no longer be supported.

              See https://peps.python.org/pep-0440/ for details.
              ********************************************************************************

      !!
        return _normalization.best_effort_version(tagged)
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "C:\\Users\\USERNAME\\Documents\\BlendTorch\\pkg_blender\\setup.py", line 13, in <module>
          setup(
        File "C:\\Users\\USERNAME\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\__init__.py", line 107, in setup
          return distutils.core.setup(**attrs)
        File "C:\\Users\\USERNAME\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\core.py", line 185, in setup
          return run_commands(dist)
        File "C:\\Users\\USERNAME\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\core.py", line 201, in run_commands
          dist.run_commands()
        File "C:\\Users\\USERNAME\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\dist.py", line 969, in run_commands
          self.run_command(cmd)
        File "C:\\Users\\USERNAME\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\dist.py", line 1234, in run_command
          super().run_command(command)
        File "C:\\Users\\USERNAME\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\dist.py", line 987, in run_command
          cmd_obj.ensure_finalized()
        File "C:\\Users\\USERNAME\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\cmd.py", line 111, in ensure_finalized
          self.finalize_options()
        File "C:\\Users\\USERNAME\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\command\\egg_info.py", line 218, in finalize_options
          parsed_version = packaging.version.Version(self.egg_version)
        File "C:\\Users\\USERNAME\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_vendor\\packaging\\version.py", line 198, in __init__
          raise InvalidVersion(f"Invalid version: '{version}'")
      setuptools.extern.packaging.version.InvalidVersion: Invalid version: '-0.4.0-'
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
b"Obtaining file:///C:/Users/USERNAME/Documents/BlendTorch/pkg_blender\r\n  Preparing metadata (setup.py): started\r\n  Preparing metadata (setup.py): finished with status 'error'\r\n"
