The <model-viewer> project

This is the main GitHub repository for the <model-viewer> web component and all of its related projects.

Getting started? Check out the <model-viewer> project!

The repository is organized into sub-directories containing the various projects. Check out the README.md files for specific projects to get more details:

👩‍🚀 <model-viewer> • The <model-viewer> web component (probably what you are looking for)

✨ <model-viewer-effects> • The PostProcessing plugin for <model-viewer>

🌐 modelviewer.dev • The source for the <model-viewer> documentation website

🖼 render-fidelity-tools • Tools for testing how well <model-viewer> renders models

🎨 shared-assets • 3D models, environment maps and other assets shared across many sub-projects

🚀 space-opera • The source of the <model-viewer> editor

Development

When developing across all the projects in this repository, first install git, Node.js and npm.

Then, perform the following steps to get set up for development:

git clone --depth=1 git@github.com:google/model-viewer.git
cd model-viewer
npm install

Note: --depth=1 keeps you from downloading our ~3 GB of history, which is dominated by all the versions of our golden render-fidelity images.

The following global commands are available:

| Command | Description |
| --- | --- |
| npm ci | Installs dependencies and cross-links sub-projects |
| npm run build | Runs the build step for all sub-projects |
| npm run serve | Runs a web server and opens a new browser tab pointed to the local copy of modelviewer.dev (don't forget to build!) |
| npm run test | Runs tests in all sub-projects that have them |
| npm run clean | Removes built artifacts from all sub-projects |

You should now be ready to work on any of the <model-viewer> projects!

Windows 10/11 Setup

Due to dependency issues on Windows 10, we recommend running the <model-viewer> setup from a WSL2 environment, and installing Node.js and npm via nvm.

You should clone model-viewer from inside WSL, not from inside Windows; otherwise you might run into line-ending and symlink issues.
To clone via HTTPS in WSL (there are known file-permission issues with SSH keys inside WSL):

git clone --depth=1 https://github.com/google/model-viewer.git
cd model-viewer
npm install

To run tests in WSL, you need to bind CHROME_BIN:

export CHROME_BIN="/mnt/c/Program Files/Google/Chrome/Application/chrome.exe"
npm run test

Note that you should be able to run the packages/model-viewer and packages/model-viewer-effects tests with that setup, but running fidelity tests requires GUI support which is only available in WSL on Windows 11.

Additional WSL Troubleshooting โ€“ provided for reference only

These issues should not happen when you have followed the above WSL setup steps (clone via HTTPS, clone from inside WSL, bind CHROME_BIN). The notes here might be helpful if you're trying to develop model-viewer from inside Windows (not WSL) instead (not recommended).

Running Tests

Running npm run test on WSL requires an environment variable, CHROME_BIN, that points at your Chrome binary. You can set it via this command (this is the default Chrome install directory; it may be elsewhere on your machine):

export CHROME_BIN="/mnt/c/Program Files/Google/Chrome/Application/chrome.exe"
npm run test

Tests in packages/model-viewer and packages/model-viewer-effects should now run properly; fidelity tests might still fail (see errors and potential workarounds below).

Error: /bin/bash^M: bad interpreter: No such file or directory

Symptom: Running a .sh script, for example fetch-khronos-gltf-samples.sh, throws the error message /bin/bash^M: bad interpreter: No such file or directory

Alternative error:

! was unexpected at this time.
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! @google/[email protected] prepare: `if [ ! -L './shared-assets' ]; then ln -s ../shared-assets ./shared-assets; fi && ../shared-assets/scripts/fetch-khronos-gltf-samples.sh`

Solution: This is caused by incorrect line endings in some of the .sh files, due to git rewriting them on checkout on Windows (not inside WSL). It's recommended to clone the model-viewer repository from a WSL session.

As a workaround, you can re-write line endings using the following command:

sed -i -e 's/\r$//' ../shared-assets/scripts/fetch-khronos-gltf-samples.sh

Error: ERROR:browser_main_loop.cc(1409)] Unable to open X display.

Symptom: When trying to npm run test, errors similar to the following are logged:

โŒFail to analyze scenario :khronos-IridescentDishWithOlives! Error message: โŒ Failed to capture model-viewer's screenshot
[836:836:0301/095227.204808:ERROR:browser_main_loop.cc(1409)] Unable to open X display.

Puppeteer tests need a display output; this means GUI support for WSL is required, which seems to be (easily) available only on Windows 11, not Windows 10.
https://docs.microsoft.com/de-de/windows/wsl/tutorials/gui-apps#install-support-for-linux-gui-apps

So the workaround seems to be running Windows 11 (not yet tested).

Error: ERROR: Task not found: "'watch:tsc"

Symptom: Running npm run dev in packages/model-viewer on Windows throws the error ERROR: Task not found: "'watch:tsc".

Solution (if you have one please make a PR!)

model-viewer's People

Contributors

bhouston, bsdorra, cdata, chrismgeorge, dependabot[bot], diegoteran, e111077, elalish, futahei, gkjohnson, hjeldin, hybridherbst, jsantell, klausw, lucadalli, mikkoh, milesgreen, mqg734, mrdoob, prideout, pushmatrix, samaneh-kazemi, smalls, srirachasource, stevesan, sun765, takahirox, yiyix, yuinchien, ziyanma


model-viewer's Issues

Decide on licensing model to use

Current options on the table include:

  • Google licensing model (currently in place)
  • Polymer licensing model (BSD 3 Clause, etc)

Target Magic Leap's ml-model

This may be an addon/mixin that adds a new attribute (some version of source-magicleap, magicleap-source, or source-fbx), and abstracts the <ml-model> tag (passing down parameters from the enclosing <xr-model> into an <ml-model> nested into a shadow root).

Documentation/Website

For release, we need:

  • Docs (can be README, or on website) -- any patterns/tools for documenting web component APIs?
    • APIs, caveats, the polyfill story, compatibility matrix, best practices
    • #8
  • Ensure all example pages work, and ideally some nice way of navigating them (e.g. three.js demos)
  • Maybe a gif for the README

Limit OrbitControls?

Related to #56 maybe, but perhaps we can consider allowing rotation only around the Y axis, plus a limit on zooming (or removing it, or using a different gesture due to #56?). When using a provided image, it's possible to zoom out too far and see the skysphere, or to zoom in too close until the near plane kicks in, or to 'get lost', so to speak, with no way of resetting.
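The limits described above can be sketched as a simple clamp on the orbit parameters. This is a hypothetical helper (the function name and limit values are illustrative, not the component's actual API):

```javascript
// Clamp an orbit camera's spherical parameters so the user can neither
// zoom out far enough to see the skysphere nor zoom in far enough to
// clip the near plane. Theta (azimuth around Y) stays unconstrained.
function clampOrbit({theta, phi, radius}, limits) {
  const clamp = (v, lo, hi) => Math.min(Math.max(v, lo), hi);
  return {
    theta, // free rotation around Y
    phi: clamp(phi, limits.minPhi, limits.maxPhi),       // avoid flipping over the poles
    radius: clamp(radius, limits.minRadius, limits.maxRadius), // zoom limits
  };
}

const limits = {minPhi: 0.2, maxPhi: Math.PI - 0.2, minRadius: 1, maxRadius: 10};
clampOrbit({theta: 3.0, phi: 0.0, radius: 50}, limits);
// phi is raised to 0.2 and radius pulled in to 10
```

Clamping (rather than rejecting input) also gives a natural "reset" story: a reset action can simply lerp the parameters back to defaults inside the same bounds.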

Require 'src' attribute

For inline, WebXR, and Magic Leap, we'll need a glTF/glb file. There's a scenario where only the USDZ is provided, with a poster, which is more or less no different from iOS's native AR Quick Look.

This may already work due to the mixin pattern, but something we should confirm, and/or clarify in docs.

Remove <source> configuration API

It's pedantically correct but potentially confusing (including requiring users to know/specify MIME types).

Instead, use a simpler, attribute-based approach. The main glTF resource can remain source, and we can have other optional attributes for source-usdz and potentially source-ml (or source-fbx).
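The attribute-based selection could look something like the sketch below. The attribute names (source, source-usdz, source-ml) are the proposals from this issue, not a shipped API, and the env flags are hypothetical:

```javascript
// Pick the best source attribute for the current environment, falling
// back to the main glTF resource. getAttribute is injected so the logic
// can be exercised without a DOM.
function pickSource(getAttribute, env) {
  if (env.isQuickLook && getAttribute('source-usdz')) {
    return getAttribute('source-usdz'); // iOS AR Quick Look wants USDZ
  }
  if (env.isMagicLeap && getAttribute('source-ml')) {
    return getAttribute('source-ml');   // Magic Leap's preferred format
  }
  return getAttribute('source');        // the main glTF/glb resource
}

const attrs = {'source': 'Astronaut.glb', 'source-usdz': 'Astronaut.usdz'};
const get = (name) => attrs[name] || null;
pickSource(get, {isQuickLook: true}); // 'Astronaut.usdz'
pickSource(get, {});                  // 'Astronaut.glb'
```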

glTF spec support

We'll be using GLTFLoader for this, which handles both versions, but maybe we should be clear on what we're supporting. Related to dynamic lights (#34), since 2.0 has a different light extension.

The model is not centered correctly

I have a 3D model and when I tried to load it with <xr-model> it renders at the bottom and a portion of the model gets cut off. See this screenshot:

[screenshot, 2018-10-26: the model renders at the bottom of the viewport with a portion cut off]

The same model renders nicely in https://gltf-viewer.donmccurdy.com/ which is what I would expect.

[screenshot, 2018-10-26: the same model rendered correctly in the gltf-viewer]

I tried a few 3D models from poly.google.com and they all seem to render a bit off in <xr-model>. It would be nice if <xr-model> could render the model without needing to adjust it by hand.

Gate WebXR support on supportsSession

Even with WebXR flags enabled and ARCore installed, the ARCore bindings do not ship in Chrome release or beta. This means checking for the existence of XRSession.prototype.requestHitTest is insufficient, since it will exist whenever the flags are enabled, even in release. That test was originally used because the initial releases of AR support did not have any flags to distinguish an AR session from a typical VR session, but now there is the temporary session option environmentIntegration that we can use.

requestSession requires a user gesture, but we can call supportsSession({ environmentIntegration: true }) outside of a gesture to test existence of AR support (if WebXR is detected). May also need to pass in the XRPresentationContext in the outputContext session config key.
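The gating logic above can be sketched as follows, written against the draft WebXR API of the time (supportsSession with the temporary environmentIntegration option). The xr object is injected here so the logic is testable outside a browser; in practice it would come from the page's WebXR entry point:

```javascript
// Returns true only when WebXR is present AND the UA accepts an
// AR-flavored session configuration. Unlike requestSession, the
// supportsSession probe needs no user gesture.
async function hasARSupport(xr) {
  if (!xr) {
    return false; // WebXR not detected at all
  }
  try {
    await xr.supportsSession({environmentIntegration: true});
    return true;
  } catch (e) {
    return false; // session config rejected: no AR bindings available
  }
}
```

Note that the modern WebXR Device API later replaced this with navigator.xr.isSessionSupported('immersive-ar'); the sketch reflects the API discussed in this issue.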

Related bug: https://bugs.chromium.org/p/chromium/issues/detail?id=898980

Revisit poster & preload behavior

This one was tricky earlier, and we were never fully happy with a solution.

Models can be large, and we may want to lazy-load them until an interaction occurs. We can display a poster image in the meantime, but if preload is the default, our "out of the box" model (<xr-model src="..."></xr-model>) would not preload, or if it does preload, it doesn't have a poster to display. We could have some smart options (like never preloading on mobile), but balancing the out-of-the-box default params with the ideal user experience is something we'll need to figure out.
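The tradeoff above amounts to a small decision table. The option names below are illustrative, not the component's actual attributes:

```javascript
// Hypothetical policy for when to fetch the model. Eagerly preload only
// when asked for and off mobile; otherwise lean on a poster if one is
// available, and fall back to loading when the element becomes visible.
function loadingStrategy({preload, hasPoster, isMobile}) {
  if (preload && !isMobile) {
    return 'preload';                  // eager fetch, e.g. on desktop
  }
  if (hasPoster) {
    return 'poster-until-interaction'; // cheap first paint, defer the model
  }
  return 'load-on-visible';            // lazy-load as a last resort
}
```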

Related is what kind of messaging is provided if the user needs to interact with the element before loading a model? "Click to display model"?

Create a generalized notion of 'views'

It should be possible to create a new type of view (e.g., VR, off-screen rendering) as mixins.

Consider renaming 'views' to 'modes' - views is overloaded, and we already refer to these as modes in some of the docs.

Note that it may make sense to have tasks or new issues to create those mixins when we pick this work up.

`auto-rotate` and `controls` at odds

How do we want this interaction to work if both values are set on a model?

  • Does the model continue spinning even after interacting with orbit controls?
  • Can only one exist at the same time?
  • Does the model auto-rotate until interacting with it, then orbit controls take over, and then after inactivity, go back to rotating? Do we switch back to the lazy susan camera view, or start from where the user left off via the controls?
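One candidate answer to the questions above (spin until the user interacts, hand over to orbit controls, then resume spinning after inactivity) can be sketched as a tiny arbiter. This is a hypothetical design, not the shipped behavior:

```javascript
// Tracks the last user interaction and reports whether auto-rotation
// should currently be active. Timestamps are in milliseconds, as from
// performance.now() or a requestAnimationFrame callback.
class AutoRotateArbiter {
  constructor(idleMs) {
    this.idleMs = idleMs;
    this.lastInteraction = -Infinity; // no interaction yet: rotate immediately
  }
  interact(now) {
    this.lastInteraction = now; // orbit controls take over
  }
  shouldAutoRotate(now) {
    return now - this.lastInteraction >= this.idleMs;
  }
}
```

Whether rotation resumes from the "lazy susan" pose or from wherever the user left the camera is an orthogonal choice; the arbiter only decides when to rotate, not from where.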

Simple editor/composer

We've been calling this a composer, but we need a simple way to at least:

  • Import multiple models, and export as a single model
  • Perform simple movements (changing the origin, scaling)
  • Preview how the model will appear embedded or in AR

iOS Quick Look - make sure that it works as expected, and that the caveats outlined here are documented in the README

Meta bug, can break out if not handled here. Taken from current readme:

  • There is currently no way to tell whether an iOS device has AR Quick Look support. Possibly check for other features added in Safari iOS 12 (like CSS font-display): https://css-tricks.com/font-display-masses/ Are there any better ways?

  • Since there are no USDZ three.js loaders (and seems that it'd be difficult to do), Safari iOS users would either need to load a poster image, or if they load the 3D model content before entering AR, they'd download both glTF and USDZ formats, which are already generally rather large. Not sure if this is solvable at the moment, so we may need to just document the caveats.

  • With native AR Quick Look, the entire image triggers an intent to the AR Quick Look. Currently in this component implementation, the user must click the AR button. Unclear if we want to change this, as interacting and moving the model could cause an AR Quick Look trigger. Do we want this to appear as native as possible?

  • The size of the AR Quick Look native button scales to some extent based off of the wrapper. We could attempt to mimic this, or leverage the native rendering possibly with a transparent base64 image. I've played around with this, but someone with an OSX/iOS device should figure out the rules for the AR button size. I did some prelim tests here: https://ar-quick-look.glitch.me/

We should use lit-element's `UpdatingElement` as the base class

Refactor renderer to handle different strategies

Right now, there are cases where each model having its own GL renderer is more performant than using one renderer (via #50) blitting to 2D canvases. Most cases, actually, but harder to test without a way to flip between rendering strategies.

While working on environment maps, each model will need access to whatever renderer it's using to create the appropriate maps, which is what prompted this issue.

Rename xr-model to model-viewer throughout project

While our vision includes a generic component that supports multiple views (embedded, AR and VR), this first release will support only embedded and AR.

Is there room for a specific embedded-and-AR component, perhaps with a simpler interface, and should we adjust the name to reflect our initial focus, perhaps to ar-model?

Allow users to bring their own lights

While some content publishers want full control over the scene, there are real perf costs for things like shadows and lights. Do we want something like allow-model-lights to import dynamic lights that are found in a glTF model? Would this enable shadows as well? Should this be disabled, even if set, on mobile?
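An opt-in like the proposed allow-model-lights could work by pruning light nodes from the loaded scene graph unless the author asked for them. Plain objects stand in for three.js nodes here (three.js lights do expose an isLight flag), and the attribute name is the hypothetical one from this issue:

```javascript
// Recursively rebuild a scene tree, dropping any node flagged isLight
// unless the author opted in. Returns a new tree; the input is untouched.
function filterLights(node, allowLights) {
  const children = (node.children || [])
    .filter((child) => allowLights || !child.isLight)
    .map((child) => filterLights(child, allowLights));
  return {...node, children};
}

const scene = {
  name: 'root',
  children: [
    {name: 'mesh', children: []},
    {name: 'sun', isLight: true, children: []},
  ],
};
filterLights(scene, false); // 'sun' is removed
filterLights(scene, true);  // both children kept
```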

Externalize dependency on polyfills

The content author should decide if this is the correct polyfill, or if one's needed at all depending on which browsers they're targeting.

WebXR+AR experience design

The current/previous WebXR+AR experience was the crudest thing imaginable: a circle appears on a surface when one is found, and tapping places the model there. No other visuals or indications. Very MVP. Open questions surrounding this, and what we need or can do for an initial release:

Stabilizing notification

In the WebXR+AR codelab, a gif is displayed to users indicating they should "look around" with their device as a pose is found. This can take a few seconds in some environments on some phones.

We use this gif with the caveat that there's some compositing jank with it.

Instructions

In the Chacmool AR experience, instructions are rendered to describe how to interact.

Footprint Reticle

In the Chacmool AR experience, the reticle becomes the footprint of the model, and displays additional information regarding the dimensions (once placed, see the other Chacmool image here).

Interaction Controls

Apparently it's difficult to touch the screen and press two buttons to take a screenshot, but you can see some styled touch marks when interacting, and you can slide (translate) and rotate the model (via a two-finger rotate). These interactions are/were non-trivial to get working, and are probably out of scope for MVP. I think the "pin" button halts the movements.

Exit controls

This seems easy and something we should probably have. The "X" in the upper left in Chacmool example.

'background-image' feature?

We have in previous iterations offered a background-image attribute that allows setting an equirectangular image to be used as the skybox, as well as an environment map. Is this something we want?

Add ImageBitmapRendering + OffscreenCanvas (on main thread) rendering mode

While we could use ImageBitmapRenderingContext on the main thread to blit textures more efficiently than with Canvas2DRenderingContexts, the bitmap rendering and offscreen canvas APIs are related: if one is supported, both should be available, meaning we can always do rendering in a worker for this codepath. We could transferControlToOffscreen for each model element, which would give us multiple WebGL contexts in a worker (or multiple workers), but it's unclear whether that's more performant than a single context in a worker sending out textures. It's also unclear whether one can crop the bitmaps created from there.

More info: #10 (comment)

Dependent on #10 in some cases, but after writing this, now I'm not sure.
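The "if one is supported, both should be available" assumption above still deserves an explicit check before choosing the worker codepath. A minimal sketch, with the global object injected so it can be tested outside a browser:

```javascript
// Detect whether the worker/offscreen rendering path is viable: both the
// OffscreenCanvas constructor and ImageBitmapRenderingContext must exist
// on the global object (window or a worker's self).
function canUseOffscreenPath(globalObj) {
  return 'OffscreenCanvas' in globalObj &&
         'ImageBitmapRenderingContext' in globalObj;
}
```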

CDN release, provided builds

We discussed that having a hosted version of the component on a CDN that could be easily included would be helpful.

There are a few use cases we should support:

  • A fully bundled release (including all polyfills and three.js)
  • An unbundled release (for users who already have three.js)
  • A specifically versioned URL for stability
  • A 'latest stable' URL to always get the latest & greatest

Where to host this is TBD. We should include links in the README so this is discoverable.
