
Comments (6)

jbms commented on April 27, 2024

As far as I know, no one else is currently working on VR support, so that would be a great thing to work on.

I just took a quick look at the WebVR API --- I think this should be feasible, although it may require a fair bit of work. I suspect that the React VR API will not be helpful, as it is likely too high level to be usable within neuroglancer (and appears to be based on ThreeJS, which is not used by neuroglancer).

The way rendering works in neuroglancer is that there is a single WebGL canvas that covers the entire screen (this is necessary in order to share data in GPU memory between different views), and then each individual cross-sectional or 3-d rendered view is considered a "panel", each of which corresponds to a DOM element layered on top of the canvas. The overall rendering is handled in neuroglancer/display_context.ts --- each panel that uses webgl (called a RenderedPanel) is registered with the DisplayContext. When it is time to redraw, the DisplayContext loops through the panels, and for each panel sets up the WebGL viewport and then tells it to draw itself.
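
Schematically, that pattern looks roughly like the following (a simplified sketch only; the actual RenderedPanel/DisplayContext classes in display_context.ts have more state and different signatures):

// Hypothetical, simplified sketch of the panel/redraw pattern described above.
interface PanelSketch {
  element: HTMLElement;  // DOM element layered on top of the shared full-screen canvas
  draw(): void;          // draws this panel's cross-sectional or 3-d view
}

class DisplayContextSketch {
  private panels = new Set<PanelSketch>();

  constructor(private gl: WebGLRenderingContext,
              private canvas: HTMLCanvasElement) {}

  registerPanel(panel: PanelSketch) {
    this.panels.add(panel);
  }

  redraw() {
    for (const panel of this.panels) {
      // Point the WebGL viewport at the region of the shared canvas covered by
      // this panel's DOM element, then let the panel draw itself.
      const rect = panel.element.getBoundingClientRect();
      this.gl.viewport(rect.left, this.canvas.height - rect.bottom,
                       rect.width, rect.height);
      panel.draw();
    }
  }
}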

The 3-d rendered view is handled by PerspectivePanel in neuroglancer/perspective_view/panel.ts.

For WebVR, you could have a layout with two PerspectivePanel panels, one for each eye, and no other UI elements. You would give each PerspectivePanel a separate NavigationState corresponding to the camera position of each eye. You might need/want to add some additional options to PerspectivePanel to get additional control over the projection.
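
For the side-by-side layout itself, the two per-eye panels could simply be backed by half-width DOM elements layered over the shared canvas, roughly like this (a plain DOM sketch; it does not use the actual PerspectivePanel/NavigationState constructors):

// Sketch: two half-width container elements, one per eye, layered over the
// shared full-screen canvas; each would back one PerspectivePanel with its
// own NavigationState.
function makeEyeContainers(parent: HTMLElement): {left: HTMLElement, right: HTMLElement} {
  const make = (leftEdge: string): HTMLElement => {
    const el = document.createElement('div');
    el.style.position = 'absolute';
    el.style.top = '0';
    el.style.left = leftEdge;
    el.style.width = '50%';
    el.style.height = '100%';
    parent.appendChild(el);
    return el;
  };
  return {left: make('0'), right: make('50%')};
}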

If you want to be able to switch between regular and VR display, that should be possible with the existing support in Neuroglancer for changing the set of panels dynamically.

Another thing to consider is a spherical projection view of the volumetric data as has been proposed by Moritz Helmstaedter --- that could be useful for VR display (you would presumably view the same sphere from two offset positions) as well as regular display. Because the existing cross-sectional view support supports arbitrary oblique planes, much of the infrastructure for implementing this rendering mode already exists, although it would certainly still require a fair amount of work.

Either the discussion group or this issue tracker is fine for asking questions like this.

Kaju-Bubanja commented on April 27, 2024

I have made some progress, but got stuck again with a problem. I managed to implement your advice and render to the Oculus using two different PerspectivePanels with offset NavigationStates. I found a good offset for one initial condition where the result looks convincingly 3D, but of course as soon as you start rotating the object the offset becomes wrong, since it is fixed to the data axes and the axes rotate too.
Then I tried another approach inspired by this example.
Approach 1
The most important part there is lines 250-257:

gl.viewport(0, 0, webglCanvas.width * 0.5, webglCanvas.height);
cubeSea.render(frameData.leftProjectionMatrix, frameData.leftViewMatrix, stats);

gl.viewport(webglCanvas.width * 0.5, 0, webglCanvas.width * 0.5, webglCanvas.height);
cubeSea.render(frameData.rightProjectionMatrix, frameData.rightViewMatrix, stats);

vrDisplay.submitFrame();

I tried this in the draw function in panel.ts and managed to draw the same scene twice, side by side, within a single panel, like this:

// Left half of the canvas:
gl.viewport(0, 0, this.displayContext.width / 2, this.displayContext.height);
this.offscreenCopyHelper.draw(this.left, renderContext,
  this.offscreenFramebuffer.colorBuffers[OffscreenTextures.COLOR].texture);
// Right half of the canvas:
gl.viewport(this.displayContext.width / 2, 0, this.displayContext.width / 2, this.displayContext.height);
this.offscreenCopyHelper.draw(this.left, renderContext,
  this.offscreenFramebuffer.colorBuffers[OffscreenTextures.COLOR].texture);

Then in the OffscreenCopyHelper draw method I tried to apply the projectionMatrix and the viewMatrix like this:

if (isLeft) {
  gl.uniformMatrix4fv(shader.uniform('uProjectionMatrix'), false, renderContext.frameData.leftProjectionMatrix);
  gl.uniformMatrix4fv(shader.uniform('uModelMatrix'), false, renderContext.frameData.leftViewMatrix);
} else {
  gl.uniformMatrix4fv(shader.uniform('uProjectionMatrix'), false, renderContext.frameData.rightProjectionMatrix);
  gl.uniformMatrix4fv(shader.uniform('uModelMatrix'), false, renderContext.frameData.rightViewMatrix);
}
// This was originally there
// gl.uniformMatrix4fv(shader.uniform('uProjectionMatrix'), false, identityMat4);

I assumed that the viewMatrix corresponds to uModelMatrix and the projectionMatrix to uProjectionMatrix, because the WebVR example defines this in CubeSea.js:

    "  vTexCoord = texCoord;",
    "  gl_Position = projectionMat * modelViewMat * vec4( position, 1.0 );",
    "}",

and similarly in mesh/frontend.ts this is defined:

builder.setVertexMain(`
gl_Position = uProjection * (uModelMatrix * vec4(aVertexPosition, 1.0));
vec3 normal = (uModelMatrix * vec4(aVertexNormal, 0.0)).xyz;
float lightingFactor = abs(dot(normal, uLightDirection.xyz)) + uLightDirection.w;
vColor = vec4(lightingFactor * uColor.rgb, uColor.a);
`);

I expected this to work and correctly apply the matrices, but instead I just get a black panel. I assume the transformation moves the coordinates somewhere outside the field of view. So my questions are: is my assumption correct that these are the same two uniforms in the WebVR example and in Neuroglancer, i.e. does the viewMatrix correspond to uModelMatrix and the projectionMatrix to uProjection? And if so, any idea why I just get a black screen? Should I apply these transformations somewhere else?

Alternatively, what I thought should also work is changing the rotation feature so that it always rotates the object around its own center rather than the whole coordinate system, although this still leaves the problem of how to apply the viewMatrix and projectionMatrix.

Basically my question is: can I really just set the gl.viewport somewhere in neuroglancer, render everything while applying the view and projection matrices I get from the VR device, and then repeat the same process with an offset viewport? Is there something in neuroglancer preventing this, or am I just not finding the right place to do it?

Approach 2
I found a better way to do what I wanted. I realized that the viewOffset in the panel.draw() method is the camera position, so I moved the left camera slightly to the left and the right camera slightly to the right, which looks a bit better. My next idea was to use the mat4 in updateProjectionMatrix in panel.ts: there I would read the pose of the device and apply a translation/rotation to the mat4.
I tried this, but got very fast flickering effects. It looked as if the frames were not rendered fast enough, so that you could notice the clearing between frames. Any idea how fast neuroglancer renders frames, or is there a metric that reports this? Does this second approach make sense?

jbms commented on April 27, 2024

I'm not sure exactly what the appropriate formula is for computing the left vs. right projection matrices for good stereo results, but I'm sure it is documented or shown in examples somewhere. As you found, the offset can't be a fixed xyz vector in data coordinates, because it needs to be in the local rotation frame. Additionally, the offset distance shouldn't be in absolute spatial coordinates (which correspond to nanometers), because I think you want the offset to be a fixed amount after zooming is applied.
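
For example, the per-eye offset could be expressed in the camera's local frame by rotating a fixed x-axis offset with the current orientation quaternion, rather than adding a constant vector in data coordinates (a gl-matrix sketch; the function name and where the orientation and separation come from are up to you):

import {quat, vec3} from 'gl-matrix';

// Sketch: express a fixed per-eye offset in the camera's local rotation
// frame, so that the stereo separation follows the view as it rotates.
function rotatedEyeOffset(orientation: quat, separation: number,
                          isLeft: boolean): vec3 {
  // Offset along the camera's local x axis (half the separation per eye).
  const offset = vec3.fromValues((isLeft ? -0.5 : 0.5) * separation, 0, 0);
  // Rotate into the current view orientation instead of leaving it fixed
  // to the data axes.
  vec3.transformQuat(offset, offset, orientation);
  return offset;
}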

As far as approach 1, you don't want to do anything with OffscreenCopyHelper, because that is just used to copy an already-rendered image of the scene from a texture to the canvas framebuffer.

I think the best approach would be to add an additional parameter to PerspectivePanel that specifies an additional offset or other adjustment to apply to the projection matrix in order to achieve the desired stereo effect.

As far as the flickering, did that occur only with approach 2 but not with your original attempt at modifying the navigation state? Neuroglancer doesn't have a fixed frame rate; it only re-renders when something has changed, although the maximum frame rate is limited by the Javascript requestAnimationFrame facility. It doesn't currently keep track of the frame rate, but you could add something to the update method of DisplayContext in display_context.ts to keep track of it. My understanding is that the WebVR submitFrame call should control when frames are actually submitted, and should avoid flickering. Perhaps that is not being called at the right time?
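
A minimal way to measure the effective frame rate would be to record timestamps on each redraw, e.g. somewhere in the update method of DisplayContext (the hook point and names here are only a suggestion):

// Sketch: track the time between redraws to estimate the effective frame rate.
let lastFrameTime: number | undefined;

function recordFrame() {
  const now = performance.now();
  if (lastFrameTime !== undefined) {
    const fps = 1000 / (now - lastFrameTime);
    console.log(`~${fps.toFixed(1)} fps since last redraw`);
  }
  lastFrameTime = now;
}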

jbms commented on April 27, 2024

Another possibility regarding the flickering: is the HTML canvas element used for rendering to the VR display supposed to be added to the HTML document's DOM, or should it just be left unparented?

Kaju-Bubanja commented on April 27, 2024

I implemented your suggestions and changed the way I update the projection matrix. Before, I tried to take the linear and angular velocities from the device and calculate the future positions of the cameras; now I take the position and orientation as input, and this is fast enough. It works well, and looking around gives you the usual VR feeling (3D, rotation, translation, etc.). Here is the main part I added to updateProjectionMatrix:

let viewOffset: vec3 = vec3.fromValues(0,0,200);
this.displayContext.vrDisplay.getFrameData(this.displayContext.frameData);
let orientation = this.displayContext.returnQuatorZeroQuat(this.displayContext.frameData.pose.orientation);
let orientationMat: mat4 = mat4.create(); 
mat4.fromQuat(orientationMat, orientation);
let position = this.displayContext.returnVec3OrZeroVec3(this.displayContext.frameData.pose.position);

vec3.add(viewOffset, viewOffset, position);
// first approach to apply the quaternion which did not work.
// vec3.transformQuat(viewOffset, viewOffset, quat.invert(orientation, orientation));
mat4.translate(modelViewMat, modelViewMat, viewOffset);
mat4.multiply(modelViewMat, modelViewMat, orientationMat);
// Need to make this depend on zoom level.
if (isLeft) {
  vec3.set(viewOffset, -15, 0, 0);
} else {
  vec3.set(viewOffset, 15, 0, 0);
}
mat4.translate(modelViewMat, modelViewMat, viewOffset);
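
For the "depend on zoom level" comment above, one option might be to scale the per-eye x offset by the current zoom factor instead of using the hard-coded ±15, roughly like this (how the zoom factor is read from the navigation state is still to be worked out):

import {vec3} from 'gl-matrix';

// Sketch: scale the per-eye x offset by the current zoom so the stereo
// separation stays visually constant as you zoom in and out.  zoomFactor is
// assumed to be readable from this panel's navigation state.
function zoomScaledEyeOffset(isLeft: boolean, zoomFactor: number): vec3 {
  const baseSeparation = 0.03;  // hand-tuned base value, replacing the fixed 15
  return vec3.fromValues((isLeft ? -0.5 : 0.5) * baseSeparation * zoomFactor, 0, 0);
}

The result would then be passed to mat4.translate in place of the viewOffset set in the if/else above.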

The only remaining issue is slight elongation effects when rotating around the cameras' axis (i.e., tilting your head sideways); I will look into that. There is no flickering in the headset; the flickering happens only in the mirrored image on the PC screen. It might be due to the following part of the specification, but it is currently a minor problem and I will look at it later:

The source attribute defines the canvas whose contents will be presented by the VRDisplay when VRDisplay.submitFrame() is called. Upon being passed into requestPresent() the current backbuffer of the source’s context MAY be lost, even if the context was created with the preserveDrawingBuffer context creation attribute set to true.

Thank you for your help.

jbms commented on April 27, 2024

Great that you got it (mostly) working. If you are able to get it into a state suitable for merging, I'm sure other people would find it quite useful.
