
webxr-polyfill's Introduction

(deprecated, experimental) WebXR polyfill with examples

The API for "WebXR" implemented in this repository is based on a draft proposal we created as a starting point for discussing WebXR in the fall of 2017, to explore what it might mean to expand WebVR to include AR/MR capabilities.

We initially created this polyfill when the community group was calling the specification "WebVR", so using "WebXR" was not confusing. Now that the community group is working towards changing the name of the spec, this repo may be confusing to newcomers.

We're working to bring this repo's master branch in line with the community group's draft spec. But that work is not yet complete.

The WebVR community has shifted WebVR in this direction. The group is now called the Immersive Web Community Group and the WebVR specification has now become the WebXR Device API. You should consider that spec as ground-truth for WebXR, and it is what you will likely see appearing in browsers through the rest of 2018 and into 2019.

We will continue to experiment with extensions to, and new ideas for, WebXR in this library. Soon, we expect it to be integrated directly in our WebXR Viewer iOS app and no longer be included directly in any web pages.

WebXR library with examples

This repository holds an implementation of a non-compliant version of WebXR, along with sample code demonstrating how to use the API.

WARNING

THIS SOFTWARE IS NON-STANDARD AND PRERELEASE, IS NOT READY FOR PRODUCTION USE, AND WILL SOON HAVE BREAKING CHANGES.

NOTHING IN THIS REPO COMES WITH ANY WARRANTY WHATSOEVER. DO NOT USE IT FOR ANYTHING EXCEPT EXPERIMENTS.

There may be pieces of the library that are stubbed out and throw 'Not implemented' when called.

Running the examples

The master branch of this repo is automatically built and hosted at https://examples.webxrexperiments.com

The develop branch is hosted at https://develop.examples.webxrexperiments.com

Building and Running the examples

Clone this repo and then change directories into webxr-polyfill/

Install npm and then run the following:

npm install   # downloads webpack and an http server
npm start     # builds the polyfill in dist/webxr-polyfill.js and starts the http server in the current directory

Using one of the supported browsers listed below, go to http://YOUR_HOST_NAME:8080/

Portable builds

To build the WebXR polyfill into a single file that you can use in a different codebase:

npm run build

The resulting file will be in dist/webxr-polyfill.js

Writing your own XR apps

The WebXR polyfill is not dependent on any external libraries, but examples/common.js has a handy base class, XRExampleBase, that wraps all of the boilerplate of starting a WebXR session and rendering into a WebGL layer using Three.js.

Look in examples/ar_simplest/index.html for an example of how to extend XRExampleBase and how to start up an app.
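
As a rough sketch of that pattern (method names follow how the examples in this repo use the base class; treat the exact constructor arguments and hooks as assumptions and check examples/common.js for the real signatures), an app might look something like this:

class SimplestApp extends XRExampleBase {
    constructor(domElement){
        super(domElement, false) // the second argument (requesting an AR-style session) is an assumption
    }

    // Called once to populate the Three.js scene managed by the base class
    initializeScene(){
        this.scene.add(new THREE.AmbientLight(0xffffff, 1))
    }

    // Called once per frame with the current frame data
    updateScene(frame){
        // animate or update scene content here
    }
}

window.addEventListener('DOMContentLoaded', () => {
    new SimplestApp(document.getElementById('target'))
})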

If you run these apps on Mozilla's ARKit-based iOS app, they will use the class in polyfill/platform/ARKitWrapper.js to get pose and anchor data out of ARKit.

If you run these apps on Google's old ARCore-backed experimental browser, they will use the class in polyfill/platform/ARCoreCameraRenderer.js to use data from ARCore.

If you run these apps on desktop Firefox or Chrome with a WebVR 1.1 supported VR headset, the headset will be exposed as a WebXR XRDisplay.

If you run these apps on a device with no VR or AR tracking, the apps will use the 3dof orientation provided by JavaScript orientation events.

Supported Displays

  • Flat Display (AR only, needs VR)
  • WebVR 1.1 HMD (VR only, needs AR)
  • Cardboard (NOT YET)
  • Hololens (NOT YET)

Supported Realities

  • Camera Reality (ARKit on Mozilla iOS Test App, WebARonARCore on Android, WebARonARKit on iOS, WebRTC video stream (PARTIAL))
  • Virtual Reality (Desktop Firefox with Vive and Rift, Daydream (NOT TESTED), GearVR (Not Tested), Edge with MS MR headsets (NOT TESTED))
  • Passthrough Reality (NOT YET)

Supported Browsers


webxr-polyfill's Issues

need a method to remove/destroy anchors

JavaScript programmers should be able to ask to destroy any anchor. For anchors they created, this should work. For system-created anchors, this may or may not work (system-dependent).

Add VR support for FlatDisplay

The FlatDisplay currently only supports augmentation sessions, and rejects sessions that request a virtual reality.

Enhance the FlatDisplay on supported browsers to accept VR sessions and show a virtual Reality instead of a camera view.

Stub out the API for getting access to camera data for computer vision

Right now, there is no way to request the Reality camera data in order to do computer vision tasks like marker detection.

Stub out an API on Realities to request access to the camera data.
Stub out an API on Realities so that CV libs that detect markers can integrate them as XRAnchors.

Eventually, the UA should display the camera data without giving the JS app access; only when the app requests direct access to the camera data should the security prompt and check be triggered. For now, UAs that provide the camera data via WebRTC media streams use that security prompt and check.

Can I use OrbitControls or TrackballControls?

I tried to import controls using:
var orbitControls = new THREE.OrbitControls( this.camera, this.renderer.domElement );
so that I can rotate, zoom, or pan the THREE models like the "Teapot", but it doesn't work.
Is there a solution?

Argon browser support, Vuforia image/model tracking & XRAnchor States

I wanted to let you know that I am working on adding support for the Argon Browser to the polyfill, and plan to expose a way to do marker tracking with Vuforia. I'm not sure yet what that should look like, but please suggest anything if you have ideas.

My thoughts are that there can be an extension API similar to webgl-extensions (https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Using_Extensions), perhaps by exposing a similar "getExtension" API in the Reality class:

const vuforia = session.reality.getExtension('ARGON_Vuforia');

vuforia.init({}); // license data

vuforia.objectTracker.createDataset("dataset.xml").then((dataSet)=>{
    vuforia.objectTracker.activateDataset(dataSet);
    // etc. 
})

Anyway, after loading and activating a Vuforia dataset, the trackables contained in that dataset would have to be made available somehow. Trackables can have a known or unknown pose, however right now it doesn't seem to be the case that XRAnchors can have an "unknown" pose state, so I'm not sure what the best way to make them available as XRAnchors would be. Would it be okay to extend the XRAnchor to have various states, so that applications can hold to a single Anchor reference as it gains and loses tracking?

Raycasting is incorrect in mobile AR

I posted this issue as a question on Stack Overflow as I was unsure of whether it was a bug in this polyfill, three.xr.js or aframe-xr:

https://stackoverflow.com/questions/49009873/why-is-raycast-direction-calculated-incorrectly-in-webxr

However, the recent pointer to SITTING_EYE_HEIGHT and the fact that changing this constant to 0 goes some way to fix the raycasting issue have led me to believe that this is an issue that should be fixed in this polyfill.

What is the rationale for having a fixed eye height, as was introduced (or refactored) in this commit: 2a6b4e0?

Why would the raycast direction still be off on the x-axis when setting SITTING_EYE_HEIGHT to 0?

issue/feature request - audio

From my point of view, it would be interesting for you to:


furthermore:


Two things should be considered first:

  • SuperCollider, Pure Data, Csound, and PWGL are state-of-the-art open-source environments for sound synthesis, algorithmic composition, etc.
  • That means they are not ambisonics implementations.

By trying to replicate wave field synthesis in stereo and implementing it through WebCL, for instance:

  • You would be able to do things that you are not capable of doing with standard ambisonics, since WFS is a higher-order implementation of ambisonics (more interpolation, etc.), and having a framework for wave field synthesis would allow you to do VR with real large-scale speaker arrays.
  • I think trying at least to learn about wave field synthesis wouldn't be a bad idea: https://en.wikipedia.org/wiki/Wave_field_synthesis
  • Its mathematical basis is the Kirchhoff-Helmholtz integral.

From my standpoint, an informed reading of what these environments really are would be pertinent, so here is some introductory web bibliography:


Supercollider

SuperCollider is an environment and programming language originally released in 1996 by James McCartney for real-time audio synthesis and algorithmic composition.

Since then it has been evolving into a system used and further developed by both scientists and artists working with sound. It is an efficient and expressive dynamic programming language providing a framework for acoustic research, algorithmic music, interactive programming and live coding.

Released under the terms of the GPLv2 in 2002, SuperCollider is free and open-source software.


CSound

Csound is a computer programming language for sound, also known as a sound compiler or an audio programming language, or more precisely, an audio DSL. It is called Csound because it is written in C, as opposed to some of its predecessors.

It is free software, available under the LGPL.

Csound was originally written at MIT by Barry Vercoe in 1985, based on his earlier system called Music 11, which in its turn followed the MUSIC-N model initiated by Max Mathews at the Bell Labs. Its development continued throughout the 1990s and 2000s, led by John ffitch at the University of Bath. The first documented version 5 release is version 5.01 on March 18, 2006. Many developers have contributed to it, most notably Istvan Varga, Gabriel Maldonado, Robin Whittle, Richard Karpen, Michael Gogins, Matt Ingalls, Steven Yi, Richard Boulanger, and Victor Lazzarini.

Developed over many years, it currently has nearly 1700 unit generators. One of its greatest strengths is that it is completely modular and extensible by the user. Csound is closely related to the underlying language for the Structured Audio extensions to MPEG-4, SAOL.


Pure-Data

Pure Data (Pd) is a visual programming language developed by Miller Puckette in the 1990s for creating interactive computer music and multimedia works. While Puckette is the main author of the program, Pd is an open source project with a large developer base working on new extensions. It is released under a license similar to the BSD license. It runs on GNU/Linux, Mac OS X, iOS, Android and Windows. Ports exist for FreeBSD and IRIX.

Pd is very similar in scope and design to Puckette's original Max program, developed while he was at IRCAM, and is to some degree interoperable with Max/MSP, the commercial successor to the Max language. They may be collectively discussed as members of the Patcher[2] family of languages.

With the addition of the Graphics Environment for Multimedia (GEM) external, and externals designed to work with it (like Pure Data Packet / PiDiP for Linux, Mac OS X), framestein for Windows, GridFlow (as n-dimensional matrix processing, for Linux, Mac OS X, Windows), it is possible to create and manipulate video, OpenGL graphics, images, etc., in realtime with extensive possibilities for interactivity with audio, external sensors, etc.

Pd is natively designed to enable live collaboration across networks or the Internet, allowing musicians connected via LAN or even in disparate parts of the globe to create music together in real time. Pd uses FUDI as a networking protocol.


PWGL

PWGL is a program that gives the user a graphical interface for doing computer programming to create music. The interface has been designed for musicians, with many objects that allow one to see, hear, and manipulate musical materials. PWGL's interface is similar to other applications, including OpenMusic, Max/MSP, and Pd. It is most similar to OpenMusic, because both share lineage as successors to the 1980s-90s application Patchwork (the PW in PWGL refers to Patchwork).

For those familiar with Max/MSP or Pd, the biggest difference to know about PWGL is that generally all user patches are organized in the form of a tree, with many computations that happen in the "leaves" and "branches" that feed into one another and end at the bottom of the patch with one object that is the "root." The user activates the patch by evaluating this root object, which then calls all the other objects successively up the tree to the leaves, in a recursive fashion. The outermost leaves then evaluate and feed their results back down. This happens through all levels of the patch back to the root object. When the root object evaluates, it sends the final answer to the user.

Users may evaluate the patch at locations other than the "root" object. The object called for evaluation will call up its own branches and leaves and output its result to the user. Other branches of the patch will not evaluate, nor will levels of the patch below this node. To evaluate an object, select it and hit 'v' (for "eValuate"!). Instructions for how to select objects are below.


From my perspective, including any of these environments in a WebVR framework would be highly beneficial because:

  • It would allow a level of control over sound, especially for things like procedural audio, that you cannot achieve with standard MIDI and pre-rendered audio.

Bearing this in mind:

  • I think you would benefit extensively from such a thing.

So, my question is:

  • Why not consider wave field synthesis for VR, both in stereo emulation and as a possibility for driving large-scale speaker arrays?

Some content on wfs

WFS is based on the Huygens–Fresnel principle, which states that any wave front can be regarded as a superposition of elementary spherical waves. Therefore, any wave front can be synthesized from such elementary waves. In practice, a computer controls a large array of individual loudspeakers and actuates each one at exactly the time when the desired virtual wave front would pass through it.

The basic procedure was developed in 1988 by Professor A.J. Berkhout at the Delft University of Technology.[1] Its mathematical basis is the Kirchhoff-Helmholtz integral. It states that the sound pressure is completely determined within a volume free of sources, if sound pressure and velocity are determined in all points on its surface.

Therefore, any sound field can be reconstructed, if sound pressure and acoustic velocity are restored on all points of the surface of its volume. This approach is the underlying principle of holophony.

For reproduction, the entire surface of the volume would have to be covered with closely spaced monopole and dipole loudspeakers, each individually driven with its own signal. Moreover, the listening area would have to be anechoic, in order to comply with the source-free volume assumption. In practice, this is hardly feasible.

According to Rayleigh II the sound pressure is determined in each point of a half-space, if the sound pressure in each point of its dividing plane is known. Because our acoustic perception is most exact in the horizontal plane, practical approaches generally reduce the problem to a horizontal loudspeaker line, circle or rectangle around the listener.

  • The origin of the synthesized wave front can be at any point on the horizontal plane of the loudspeakers. It represents the virtual acoustic source, which hardly differs from a material acoustic source at the same position. Unlike conventional (stereo) reproduction, the virtual sources do not move along if the listener moves in the room. For sources behind the loudspeakers, the array will produce convex wave fronts. Sources in front of the speakers can be rendered by concave wave fronts that focus at the virtual source and diverge again. Hence the reproduction inside the volume is incomplete: it breaks down if the listener sits between the speakers and an inner source.

Well, I have nothing against Google; they make really outstanding stuff:

  • The Chrome browser, for instance, has a pretty solid architecture and is one of the state-of-the-art browsers available these days.
  • I would especially note that it is open source, so you can touch the code and make modifications to it.

From my perspective, working with PWGL, Csound, libpd, and SuperCollider in the context of VR may be beneficial, as:

  • these are state-of-the-art environments for things like sound synthesis and so forth.

And better:

  • most of them have ports for openFrameworks, which supports Emscripten compilation out of the box, so it shouldn't be that difficult to bind them to JavaScript.

In fact, SuperCollider:

  • has clients for Node and a language-side implementation for JavaScript (I've included links).

Bearing this in mind, I think it could be a good option to at least consider:

  • some form of binding these kinds of things to WebVR.
  • Everyone in the community could benefit from it to a large extent, so why not?

Where to host GA? (also: Enable GitHub Pages?)

Perhaps I’m unclear on how to load these examples outside the native apps. Is it helpful to have https://mozilla.github.io/webxr-polyfill/examples/ar_simplest.html, for example, be accessed by URL?

My reason for asking: @blairmacintyre wants Google Analytics to be added to the Polyfill.

  1. From where should the GA JS snippets be hosted and loaded?
    • From the Polyfill.js script?
    • Are we looking for GA from third-party sites or for these examples only?
  2. Would it be better to just add GA to the test iOS app to use Google’s iOS GA SDK?

Let me know if I’m missing something. Thanks!

Finish the WebRTC case for AR

The camera video and 3D scene are not properly aligned.
The camera video is not properly resized and positioned to fill a handset's screen.

Make the polyfill match the WebXR Devices API as closely as possible

Now that the CG has a plan for WebXR (including using the XR namespace) we need to make our polyfill as close as possible to the first version train (basically VR support) and separate out the bits that are experimental and for the second version train (the AR bits).

For v1 branch:

  • Rename XRDisplay to XRDevice
  • Restore stage coordinate system
  • Add XRFrameOfReference with stage bounds
  • Restore session setup
  • Double-check that layer setup is draft consistent
  • Implement all of the events
  • Remove the need for sleeps

For vNext branch:

  • Anchors
  • Geospatial coordinate systems
  • Additional session setup parameters
  • Floor anchors (perhaps rename to foot level?)
  • Light estimates
  • Point cloud

Clarify Anchor vs AnchorOffset in the API

Some APIs that reference an "Anchor" actually expect or return an XRAnchorOffset or a string, while other APIs expect or return an XRAnchor, which seems confusing.

For example, here are my immediate expectations of various APIs just based on the names of the methods, in comparison to the actual implementation:

XRPresentationSession.anchors
Expectation: a sequence of XRAnchor instances
Implementation: Works as expected

XRPresentationSession.addAnchor
Parameter Expectation: Accepts an instance of XRAnchor.
Implementation: Accepts a coordinate system, position, and orientation, and returns a string (a uuid for an Anchor). (No XRAnchor instance is passed or returned at all!). Also, the name addAnchor implies that we are adding an existing instance of an XRAnchor (rather than creating a new instance), so I think it would be better named createAnchor.

XRPresentationSession.removeAnchor
Expectation: Accept an instance of XRAnchor.
Implementation: Accepts an anchor uid (string). Rename to removeAnchorByUid ?

XRPresentationSession.getAnchor
Expectation: Should return an instance of XRAnchor
Implementation: Somewhat works as expected, but could be made clearer if the method were named getAnchorByUid

XRPresentationSession.findAnchor
Expectation: Should return a promise that resolves to an instance of XRAnchor
Implementation: Returns a promise that resolves to an instance of XRAnchorOffset, which is NOT an XRAnchor (does not inherit from XRAnchor). If this is going to return an XRAnchorOffset, the name should probably be changed to findAnchorOffset or something similar.

XRPresentationSession.findFloorAnchor
Expectation: Should return a promise that resolves to an instance of XRAnchor
Implementation: Returns a promise that resolves to an instance of XRAnchorOffset, which is NOT an XRAnchor (does not inherit from XRAnchor). Like the previous one, should probably be changed to findFloorAnchorOffset
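
Pulling those suggestions together, the surface proposed in this issue would look roughly like the sketch below (nothing here exists in the polyfill yet; argument lists are elided):

// Sketch of the proposed renames; arguments elided, nothing here exists yet
const anchorUID = session.createAnchor(coordinateSystem, position, orientation) // instead of addAnchor
const anchor = session.getAnchorByUid(anchorUID)                                // instead of getAnchor
session.removeAnchorByUid(anchorUID)                                            // instead of removeAnchor
const anchorOffset = await session.findAnchorOffset(/* ... */)                  // instead of findAnchor
const floorOffset = await session.findFloorAnchorOffset(/* ... */)              // instead of findFloorAnchor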

Hook the WebXR polyfill anchor APIs to the ARKit anchors.

The polyfill XRAnchors are currently stubbed out and just keep track of XRAnchors without linking them to ARKit anchors or providing hit testing in any way.

Update XRPresentationFrame.findAnchor and .addAnchor to use ARKit if available.

Handle the case when anchors are no longer trackable

In world-scale apps, existing anchors will become untrackable in some cases, such as when the user walks out of a room. Right now, the polyfill assumes that anchors are permanently trackable and has no way to indicate otherwise.

Add a flag to anchors or fire an event when they become untrackable.
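
One possible shape for this, as a sketch only (the flag and event names below are hypothetical and not part of the current polyfill):

// Hypothetical sketch, not part of the current API
anchor.addEventListener('untracked', () => {   // proposed event
    scene.remove(anchoredNode)                 // stop showing content tied to this anchor
})

if(anchor.isTrackable === false){              // proposed flag
    // the anchor still exists but currently has no tracking data
}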

Create XREffect and XRControls for THREE.js

For devs who don't want to use the XRExampleBase, but do use THREE, it takes a fair amount of work to get going with WebXR.

Ease the THREE.js devs' lives by creating an XR equivalent to VREffect and VRControls.

Implement the geospatial XRCoordinateSystem

The XRCoordinateSystem is implemented for stage, eyeLevel, and headModel but type geospatial is stubbed out.

Implement the WGS84 geodetic frame calculations to position a given latitude / longitude / altitude so that the origin and orientation lie on the plane tangent to the geodetic frame, with X, Y, and Z corresponding to East, Up, and South.

Implement the transformation calculation between geospatial coordinate systems and the other types.
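
For reference, a minimal sketch of the underlying math using standard WGS84 constants (function and variable names are placeholders; the polyfill's actual coordinate plumbing is not shown):

const WGS84_A = 6378137.0                 // semi-major axis (meters)
const WGS84_F = 1 / 298.257223563         // flattening
const WGS84_E2 = WGS84_F * (2 - WGS84_F)  // first eccentricity squared

// Geodetic latitude/longitude (radians) and altitude (meters) to ECEF
function geodeticToEcef(lat, lon, alt){
    const sinLat = Math.sin(lat), cosLat = Math.cos(lat)
    const n = WGS84_A / Math.sqrt(1 - WGS84_E2 * sinLat * sinLat)
    return [
        (n + alt) * cosLat * Math.cos(lon),
        (n + alt) * cosLat * Math.sin(lon),
        (n * (1 - WGS84_E2) + alt) * sinLat
    ]
}

// ECEF point relative to an origin, expressed in the origin's East-Up-South frame
// (X = East, Y = Up, Z = South, matching the convention described above)
function ecefToEus(point, originLat, originLon, originEcef){
    const dx = point[0] - originEcef[0]
    const dy = point[1] - originEcef[1]
    const dz = point[2] - originEcef[2]
    const sinLat = Math.sin(originLat), cosLat = Math.cos(originLat)
    const sinLon = Math.sin(originLon), cosLon = Math.cos(originLon)
    const east  = -sinLon * dx + cosLon * dy
    const north = -sinLat * cosLon * dx - sinLat * sinLon * dy + cosLat * dz
    const up    =  cosLat * cosLon * dx + cosLat * sinLon * dy + sinLat * dz
    return [east, up, -north]   // south = -north
}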

Create a test suite for ARKitWrapper

The iOS ARKit platform wrapper, in ARKitWrapper.js, has no tests that check the iOS <-> JS bridge for creating and tracking anchors and receiving pose information.

Create a test suite that automates testing of ARKitWrapper.

Turn on the recording UI when on the iOS app

Currently we don't pass the right flags, so when the user touches the eye icon they don't get access to the photo and video recording UI.

Pass the correct initialization flags to ARKit to turn on the recording UI.

Add a Platform Abstraction Layer

Currently, platform logic is dispersed throughout the polyfill code, in sections that are separated by if statements such as

if(this._vrDisplay !== null){ // Using ARCore
        ...
} else if(ARKitWrapper.HasARKit()){ // Using ARKit
        ...
} else {
        ...
}

I think a platform abstraction layer would simplify things. This platform abstraction can be associated with XRDisplays:

XRDisplay {
   _platform: PlatformAbstraction;
}

The XRDisplay should be able to choose the platform, and Realities should use whichever platform is exposed by the XRDisplay (without having to know anything about the platform implementation). This will also make it much easier to extend the polyfill to work with new platforms (especially without having to modify Reality implementations).
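
A rough sketch of what that layer could look like (all class and method names below are hypothetical except ARKitWrapper, which the snippet above already references):

// Hypothetical abstraction; ARKitWrapper is the only existing class referenced here
class PlatformAbstraction {
    startTracking(){ throw new Error('Not implemented') }
    getPose(){ throw new Error('Not implemented') }
}

class ARKitPlatform extends PlatformAbstraction {
    // delegate pose and anchor calls to the ARKit <-> JS bridge
}

class ARCorePlatform extends PlatformAbstraction {
    // delegate to the ARCore-backed VRDisplay
}

// The XRDisplay picks a platform once; Realities only ever talk to display._platform
function choosePlatform(vrDisplay){
    if(ARKitWrapper.HasARKit()) return new ARKitPlatform()
    if(vrDisplay !== null) return new ARCorePlatform()
    return new PlatformAbstraction() // placeholder for an orientation-only fallback
}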

Fix grey flashing on tap in webkit

Add -webkit-tap-highlight-color: rgba(0, 0, 0, 0) to the session layer's canvas to prevent the default webkit behavior of flashing grey on taps.
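
For example, something along these lines wherever the polyfill creates the layer's canvas (the exact hook point is an assumption):

// Prevent the default WebKit grey flash on tap
canvas.style.setProperty('-webkit-tap-highlight-color', 'rgba(0, 0, 0, 0)')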

Change how stage features are exposed

In AR, the idea that the local area consists of a flat floor (aka stage) will often not be true. In VR, the platforms may expose a stage center in the tracker coordinate system and may expose a 2D polygon that determines the clear space in which to move.

Expose a 'floor' XRAnchor that is always at floor level (as near as the platform can determine) underneath the current head pose.
Expose the VR style stage info (center point, polygon) through API on XRSession, and remove the 'stage' XRCoordinateSystem.
Update the examples to use the floor anchor.

View glTF animation

Hi there! I am trying to see an animation that I uploaded to the AR hit test example. I uploaded the glTF with an animation attached; however, it does not play when I open the scene. It is a static object. Do I have to add or delete something to/from the code to get it working?

Support "webvr polyfill" style virtual reality on mobile browsers with a native WebVR

Want to get the cardboard-style VR supported on mobile for doing stereo VR.

At this point, it is unclear if we want to support stereo AR on mobile devices with only one camera: in Argon4, we opted for "stereo" where the same video image is shown on both eyes. That may be sufficient for simple demos, but has issues when an object is up close.

demo description overlay

Each demo should have a div overlay with a description of what the demo does. When the user taps on the div it should disappear.
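
A minimal sketch (element ID, copy, and styling are placeholders):

// Hypothetical overlay; the ID and text are placeholders
const overlay = document.createElement('div')
overlay.setAttribute('id', 'description-overlay')
overlay.innerText = 'This demo finds the floor and anchors a model to it. Tap to dismiss.'
document.body.appendChild(overlay)

overlay.addEventListener('touchstart', () => {
    overlay.style.display = 'none'
}, { passive: true })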

Turn XRExampleBase into a more complete base class for production apps.

XRExampleBase is good for the examples, but we need something for devs to use in production code.

The new class should contain the boilerplate like XRExampleBase, but also:

  • Absorb the tricky bits for creating and finding anchors, then update node poses as new anchor data comes in.
  • Make detection and selection of XRDisplay more straightforward and flexible than just is/is not VR.
  • Provide apps with the ability to easily add to a group linked to each of the major coordinate systems (eye level, head, stage, geospatial, ...)
  • Fire events for life-cycle moments like device detection and session creation so that the app can react to them

In examples, show a message for unsupported browsers

The examples currently don't show any indication if features like tracking (ARKit, ARCore, orientation events, etc.) aren't supported.

Show a message that the current browser isn't supported and information on how to get a supported browser.
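
A rough sketch of such a check (the specific feature tests below are assumptions; the real examples would probably key off the polyfill's own display discovery):

// Assumption: no ARKit bridge, no WebVR 1.1 displays, and no orientation events means no tracking
function hasAnyTracking(){
    if(typeof ARKitWrapper !== 'undefined' && ARKitWrapper.HasARKit()) return true // iOS test app
    if(navigator.getVRDisplays) return true        // WebVR 1.1 headsets
    if(window.DeviceOrientationEvent) return true  // 3dof orientation fallback
    return false
}

if(hasAnyTracking() === false){
    const message = document.createElement('div')
    message.innerText = 'This browser is not supported. See the README for a list of supported browsers.'
    document.body.appendChild(message)
}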

Simplify application logic by exposing XRAnchor-related methods in Reality

There seems to be a lot of complexity created by having the XRAnchor creation/finding methods inside XRPresentationFrame.

For example, in the example code, we have to keep track of whether or not we already requested the floor anchor, since we can only get the floor anchor via the frame instance, and thus inside the frame callback:

// If we haven't already, request the floor anchor offset
if(this.requestedFloor === false){
  this.requestedFloor = true
  frame.findFloorAnchor('first-floor-anchor').then(anchorOffset => {
    if(anchorOffset === null){
      console.error('could not find the floor anchor')
      return
    }
    this.addAnchoredNode(anchorOffset, this.floorGroup)
  }).catch(err => {
    console.error('error finding the floor anchor', err)
  })
}

Moreover, this logic is incorrect, since it does not handle the case where the reality is changed, which would cause all the XRAnchors to become invalid (right?).

Can we just expose the various Anchor-related properties and methods (anchors, getAnchor, findAnchor, findFloorAnchor, etc.) in the Reality class so that these things don't have to be called within a frame callback that is executed repeatedly, thus eliminating the need for things like the requestedFloor flag above? For example, the anchor can easily be requested every time the Reality changes, which is the proper way to do it (assuming all anchors are invalid after a reality changes):

session.addEventListener('realitychanged', () => {
  session.reality.findFloorAnchor('first-floor-anchor').then(anchorOffset => {
    if(anchorOffset === null){
      console.error('could not find the floor anchor')
      return
    }
    this.addAnchoredNode(anchorOffset, this.floorGroup)
  }).catch(err => {
    console.error('error finding the floor anchor', err)
  })
})

Moreover, we see this pattern of delaying the creation of anchors in the anchor sample:

addAnchoredModel(sceneGraphNode, x, y, z){
  // Save this info for use during the next render frame
  this.anchorsToAdd.push({
    node: sceneGraphNode,
    x: x, y: y, z: z
  })
}

// Called once per frame
updateScene(frame){
  const headCoordinateSystem = frame.getCoordinateSystem(XRCoordinateSystem.HEAD_MODEL)
  // Create anchors and start tracking them
  for(let anchorToAdd of this.anchorsToAdd){
    // Create the anchor and tell the base class to update the node with its position
    const anchorUID = frame.addAnchor(headCoordinateSystem, [anchorToAdd.x, anchorToAdd.y, anchorToAdd.z])
    this.addAnchoredNode(new XRAnchorOffset(anchorUID), anchorToAdd.node)
  }
  this.anchorsToAdd = []
}

This can be simplified to:

addAnchoredModel(sceneGraphNode, x, y, z){
  const anchorUID = session.reality.addAnchor(headCoordinateSystem, [x, y, z])
  this.addAnchoredNode(new XRAnchorOffset(anchorUID), sceneGraphNode)
}

Write an example UI for picking an XRDisplay and switching to a new XRDisplay

The examples currently default to using the first XRDisplay that matches the session initialization parameters. This will often be the wrong choice when the device has more than one display, for example a flat display and a VR HMD display.

Implement an example UI widget that allows the apps to offer up the available displays and, during a live XRSession, switch to a new display.
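
A rough sketch of such a widget (the display list, the displayName property, and the switching callback are assumptions, not existing polyfill API):

// Hypothetical picker: given an array of XRDisplay-like objects, let the user choose one
function buildDisplayPicker(displays, onPick){
    const select = document.createElement('select')
    displays.forEach((display, index) => {
        const option = document.createElement('option')
        option.value = String(index)
        option.innerText = display.displayName || 'Display ' + index // property name is an assumption
        select.appendChild(option)
    })
    select.addEventListener('change', () => {
        // the app would end its current XRSession and request a new one on the chosen display here
        onPick(displays[Number(select.value)])
    })
    document.body.appendChild(select)
    return select
}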
