

SeanCurtis-TRI commented on July 18, 2024

Some of the camera intrinsics have some non-obvious mappings in Blender.

Non-uniform focal lengths

In Blender, we create the effect of non-uniform focal lengths using two mechanisms:

  • Defining a "baseline" focal length.
    • Blender's projection matrix doesn't allow for anisotropy, so we configure the camera to have a single, symmetric focal length.
  • Using the pixel aspect ratio to introduce anisotropy.

Setting the baseline focal length

Currently, we explicitly set the vertical field of view. We converged on this via a bit of trial and error (as documented in this review conversation). Setting camera.data.angle_x = params.fov_x did not work.
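In code form, the working assignment is just the vertical counterpart of the failing one (a sketch; params stands in for whatever object carries the Drake intrinsics, as in the snippet above):

  camera = bpy.data.objects["Camera"]
  # Works: pin the vertical field of view (radians).
  camera.data.angle_y = params.fov_y
  # Did not work as hoped:
  # camera.data.angle_x = params.fov_x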

Further Blender investigation suggests that three other parameters feed into this: sensor_width, sensor_height, and sensor_fit.

The fields of view, focal lengths, and image dimensions are all interconnected in Drake's camera parameters. Blender likewise has both the image dimensions (a render setting) and the sensor dimensions (the sensor_width and sensor_height just named). If the sensor aspect ratio doesn't match the camera's intrinsics, simple operations can lead to surprising outcomes.
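For reference, the interconnection is the standard pinhole relation (a sketch; fx and fy are Drake's focal lengths in pixels, while Blender derives its angles from lens and sensor sizes in millimeters):

from math import atan

def fovs_from_drake_intrinsics(width, height, fx, fy):
    # Pinhole model: fov = 2 * atan(image_extent / (2 * focal_length)).
    fov_x = 2 * atan(width / (2 * fx))
    fov_y = 2 * atan(height / (2 * fy))
    return fov_x, fov_y

# Blender's analog, with sensor_width and lens both in millimeters:
#   angle_x = 2 * atan(sensor_width / (2 * lens))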

The test we currently have works for a square image. However, prodding around inside Blender shows that the value of sensor_fit (combined with the ratio of the sensor dimensions) can significantly impact the final rendering, playing havoc with the baseline focal length. We don't actively do anything with the sensor dimensions, but we should.

Options:

  • We can assume that the camera created by the glTF import is going to have sensor_fit = 'AUTO' and sensor dimensions defined in such a way that we can always safely set angle_y.
  • We can explicitly set the sensor size "appropriately". I'm not quite sure what the "appropriate" value is -- although I have my suspicions. I suspect that if the sensor aspect ratio matches the image aspect ratio we're good. But I don't yet understand how the sensor dimensions feed into anything else (possibly nothing else).
    • I don't think we can programmatically set the fit (I haven't figured out where to find the bpy enumeration values -- but see the sketch below).
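On the enumeration question, the legal values turn out to be listed in the property's RNA metadata, and (as cam_fix below demonstrates) they assign as plain strings; a sketch:

  import bpy

  c = bpy.data.objects["Camera"].data
  fits = bpy.types.Camera.bl_rna.properties["sensor_fit"].enum_items
  print([f.identifier for f in fits])  # ['AUTO', 'HORIZONTAL', 'VERTICAL']
  c.sensor_fit = 'VERTICAL'  # Assigned as a plain string.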

Adding anisotropy

The current pixel aspect ratio logic, while a bit counter-intuitive, seems robust and correct (assuming the baseline field of view is correct).
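Putting the two mechanisms together, a minimal sketch (set_intrinsics is a hypothetical helper, fov_x and fov_y are in radians, and it assumes sensor_fit = 'VERTICAL' actually pins the vertical angle -- exactly the wrinkle discussed above):

from math import tan

def set_intrinsics(scene, cam_data, width, height, fov_x, fov_y):
    # Render at the requested image size.
    scene.render.resolution_x = width
    scene.render.resolution_y = height
    # Baseline: Blender's projection is symmetric, so pin the vertical
    # field of view as the single focal length.
    cam_data.sensor_fit = 'VERTICAL'
    cam_data.angle_y = fov_y
    # Anisotropy: with the vertical angle pinned, the horizontal extent
    # scales with (resolution_x * pixel_aspect_x) / (resolution_y *
    # pixel_aspect_y), so choose the pixel aspect to land on fov_x.
    r = (width * tan(fov_y / 2)) / (height * tan(fov_x / 2))
    # Blender clamps each pixel aspect to >= 1; only the ratio matters,
    # so put the stretch on whichever axis needs it.
    if r >= 1.0:
        scene.render.pixel_aspect_x = 1.0
        scene.render.pixel_aspect_y = r
    else:
        scene.render.pixel_aspect_x = 1.0 / r
        scene.render.pixel_aspect_y = 1.0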


Blender fun and games:

I defined the following two functions in a Blender console session:

# Note: `c` binds to the camera's data block later in the session (see
# `c = bpy.data.objects.get("Camera").data` below).
def cam(w, h, fov_x=None, fov_y=None):
    # Set the sensor dimensions (millimeters), then the requested angle.
    c.sensor_width = w
    c.sensor_height = h
    if fov_x is not None:
        c.angle_x = fov_x
    elif fov_y is not None:
        c.angle_y = fov_y

and

def cam_fix(w, h, fov_x=None, fov_y=None):
    # Like cam(), but also pins sensor_fit to the axis whose angle we set.
    c.sensor_height = h
    c.sensor_width = w
    if fov_x is not None:
        c.sensor_fit = 'HORIZONTAL'
        c.angle_x = fov_x
    elif fov_y is not None:
        c.sensor_fit = 'VERTICAL'
        c.angle_y = fov_y

For a square output image, I executed the following (with the indicated results):

  import bpy
  from math import pi

  c = bpy.data.objects.get("Camera").data
  cam(32, 32, pi / 2)       # Expected scene framing.
  cam(32, 32, None, pi / 2) # Expected scene framing.
  cam(32, 24, pi / 2)       # Expected scene framing.
  cam(32, 24, None, pi / 2) # Focal length decreased; scene appears to draw away.
  cam(24, 32, pi / 2)       # Expected scene framing.
  cam(24, 32, None, pi / 2) # Focal length increased; scene appears to draw closer.

  cam_fix(32, 32, pi / 2)       # Expected scene framing.
  cam_fix(32, 32, None, pi / 2) # Expected scene framing.
  cam_fix(32, 24, pi / 2)       # Expected scene framing.
  cam_fix(32, 24, None, pi / 2) # Expected scene framing.
  cam_fix(24, 32, pi / 2)       # Expected scene framing.
  cam_fix(24, 32, None, pi / 2) # Expected scene framing.

The result is different if the output image is rectangular. Simply declaring the sensor_fit is insufficient. Further investigation required.
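To chase the rectangular case, the render resolution has to change as well; a sketch of the follow-up sweep (the resolution values are arbitrary):

  scene = bpy.context.scene
  scene.render.resolution_x = 640
  scene.render.resolution_y = 480
  cam_fix(32, 24, pi / 2)        # Re-check the framing at 4:3 output.
  cam_fix(32, 24, None, pi / 2)  # Ditto.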

But as far as this issue goes, we should make sure we test output images with various image aspect ratios as well as anisotropic (non-square) pixel aspect ratios.


SeanCurtis-TRI commented on July 18, 2024

We also need to investigate the sensor aspect ratio output in our glTF files (e.g., test code). Where does the aspect ratio come from, and does it inform the sensor dimensions in Blender? Should we be doing something explicit about that on the Drake side?
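For context, a glTF perspective camera carries yfov (radians) and an optional aspectRatio, so one quick probe is to read those straight out of our test files (a sketch; the file name is hypothetical):

  import json

  with open("test_camera.gltf") as f:  # hypothetical test asset
      gltf = json.load(f)
  for cam in gltf.get("cameras", []):
      persp = cam.get("perspective", {})
      # Blender's importer has to derive sensor fit/size from these two.
      print(cam.get("name"), persp.get("yfov"), persp.get("aspectRatio"))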

