We need a testing room for devs so we don't accidentally intrude on any research or workshop lessons :) Currently, we are all placed in the default explore room.
This world will contain the 'shoot and drop' experiment, where an object is dropped at the same time as an identical object is ejected horizontally (the objects hit the ground at the same time despite their different initial velocities).
This world can contain the experiment itself, as well as a portal back to the physics hub.
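The physics behind the experiment can be sketched in a few lines: the fall time depends only on drop height, not on horizontal velocity, which is why both objects land together. A quick numeric check (values and names here are illustrative, not tied to any Circles code):

```javascript
// Fall time for an object under gravity, starting with zero vertical velocity.
// h: drop height (m), g: gravitational acceleration (m/s^2).
function fallTime(h, g = 9.81) {
  return Math.sqrt((2 * h) / g);
}

// Dropped object: no horizontal velocity.
const tDropped = fallTime(2.0);

// Ejected object: 5 m/s horizontally. Horizontal motion is independent of
// vertical motion, so the fall time from the same height is identical.
const tEjected = fallTime(2.0);
const landingX = 5.0 * tEjected; // only the landing position differs
```

Only `landingX` distinguishes the two objects, which is the point the experiment demonstrates.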
It appears that the recent standalone browsers (Oculus Browser and Firefox Reality) only properly report world transforms ( aframevr/aframe#4626 ) and allow parenting to the camera node ( aframevr/aframe#4629 ) in 2D mode, not in immersive VR mode, when real-time mirrors are in use. This appears to be due to the virtual camera created for the additional render passes.
Need to test with the recent pure three.js implementation from here ( mrdoob/three.js#19666 ) to determine whether the A-Frame layer is responsible.
Need to explore why shadow-mapping virtual cams do not appear to result in a similar issue.
Once in a while, a "this.attributes. ..." error appears in the A-Frame text component. Often it only happens on the initial load, and reloads afterwards do not show the error. Having trouble reproducing, but want to note it here.
It appears to have popped up after making geometry that includes a text component to build buttons.
Each physics object in the experiments needs to be networked so that it is visible across all connected clients.
This will require designating one client (possibly the one who initiates the experiment) as the 'owner' of the objects. The owner could then send the position and rotation of each object to other clients.
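A minimal sketch of what the owner's broadcast could look like: the owner serializes each object's transform, and other clients apply it verbatim. Function and message names here are assumptions for illustration, not the actual networked-aframe API:

```javascript
// Owner side: serialize each physics object's transform into a message.
function packTransforms(objects) {
  return objects.map((o) => ({
    id: o.id,
    position: { x: o.position.x, y: o.position.y, z: o.position.z },
    rotation: { x: o.rotation.x, y: o.rotation.y, z: o.rotation.z },
  }));
}

// Non-owner side: look up each object by id and copy the transform over.
function applyTransforms(message, objectsById) {
  for (const entry of message) {
    const obj = objectsById[entry.id];
    if (!obj) continue; // object not yet spawned on this client
    Object.assign(obj.position, entry.position);
    Object.assign(obj.rotation, entry.rotation);
  }
}
```

Only the owner runs the physics simulation; everyone else just interpolates or snaps to the received transforms.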
This world contains the 'shoot the monkey' experiment, which demonstrates that a monkey dropped at the instant a bullet is fired will always be hit, provided the bullet is aimed directly at the monkey's initial position.
The world can contain the experiment itself as well as a portal back to the physics hub.
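The reason the bullet always hits: both bullet and monkey fall by the same g·t²/2, so gravity cancels out of their relative motion and aiming along the initial line of sight is enough. A numeric check with illustrative values:

```javascript
const g = 9.81;

// Monkey hangs at (dx, dy); bullet fired from the origin, aimed straight at it.
const dx = 30, dy = 10, speed = 40;
const dist = Math.hypot(dx, dy);
const vx = speed * (dx / dist);
const vy = speed * (dy / dist);

// Time for the bullet to cover the horizontal separation.
const t = dx / vx;

// Heights at time t: both experience the same -0.5*g*t^2 drop.
const bulletY = vy * t - 0.5 * g * t * t;
const monkeyY = dy - 0.5 * g * t * t;
const miss = Math.abs(bulletY - monkeyY); // ~0: a guaranteed hit
```

Changing `speed` only changes *where* along the fall the hit happens, never *whether* it happens.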
Something like Mozilla Hubs has: a way to use a "magic link" to create an anonymous user with a temporary name and appearance, valid for one day maybe?
Maybe just a registered user that auto-deletes from MongoDB after a certain time period.
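MongoDB can do the auto-delete natively with a TTL index on a timestamp field. A sketch of the idea, where the one-day window and the field name `createdAt` are assumptions:

```javascript
// MongoDB can expire documents automatically via a TTL index, e.g.:
//   db.users.createIndex({ createdAt: 1 }, { expireAfterSeconds: 86400 });
// The equivalent check, if done application-side instead:
const TEMP_USER_TTL_MS = 24 * 60 * 60 * 1000; // one day (assumed window)

function isExpired(createdAt, now = Date.now()) {
  return now - createdAt.getTime() >= TEMP_USER_TTL_MS;
}
```

The TTL-index route is simpler operationally since Mongo's background task handles the cleanup; the application-side check would only be needed if expiry has to be visible immediately rather than within Mongo's sweep interval.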
In the research room, for example, the targets' positions are not synced, so they all just sit at the origin even though they present properly on the participant user's end.
Only have one user in charge of creating the objects. Perhaps a flag that generates them in a component when a "teacher" user enters. This could be problematic if more than one teacher enters, however ...
Some wacky solution whereby we check whether the artefacts are already in the scene: if not, we create them; otherwise we don't.
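The check-before-create idea is really just idempotent spawning. A sketch, where `existingIds` stands in for whatever query we'd run against the scene graph (the id scheme is illustrative):

```javascript
// Spawn each artefact only if an entity with its id is not already present.
// existingIds: Set of ids already in the scene; spawn: creation callback.
function spawnMissing(artefactIds, existingIds, spawn) {
  for (const id of artefactIds) {
    if (!existingIds.has(id)) {
      spawn(id);
      existingIds.add(id);
    }
  }
}
```

Because the function is idempotent, it is safe for *every* client (or every teacher) to run it; at most one creation happens per artefact, which sidesteps the multiple-teacher problem above as long as the existence check and creation are atomic on the network.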
Come up with a server-side solution that could be a start toward persistent layers: objects are mapped in the scene and saved to a database, which is queried to populate the scene during login. We will eventually need this to allow the creation of personal and shared spaces that remain in a modified state even after logout.
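A first pass at the persistent layer could be a plain round-trip: serialize each object's id and transform into a document at save time, and replay it at login. The schema below is an assumption, not an existing Circles format:

```javascript
// Serialize scene objects into a JSON document suitable for a database row.
function saveLayer(objects) {
  return JSON.stringify(
    objects.map((o) => ({ id: o.id, position: o.position, rotation: o.rotation }))
  );
}

// Rebuild the in-memory object list at login from the stored document.
function loadLayer(doc) {
  return JSON.parse(doc);
}
```

Anything that survives a `loadLayer(saveLayer(...))` round-trip unchanged is a candidate for persistence; component state beyond transforms would need its own fields in the schema.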
Right now many of the components do not follow strict coding patterns to ensure they can be attached and removed without leaving event listeners lying around. There are probably some stray comments and unused code lying around too ...
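The fix is the standard pattern of storing the bound handler at init time so that removal can detach the exact same function reference. A DOM-free sketch that mirrors A-Frame's `init`/`remove` lifecycle (class and handler names are illustrative):

```javascript
// Mirrors A-Frame's component lifecycle: keep a reference to the handler
// so remove() can unregister exactly what init() registered. An inline
// arrow function in addEventListener could never be removed later.
class ClickCounter {
  init(target) {
    this.target = target;
    this.count = 0;
    this.onClick = () => { this.count++; }; // stored, not inline
    this.target.addEventListener('click', this.onClick);
  }
  remove() {
    this.target.removeEventListener('click', this.onClick);
  }
}
```

Auditing each component's `remove()` against its `init()` this way would catch most of the leaks.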
Within the explore.pug file we have a list of worlds. It would be nice to have this list of links autogenerated by the server on startup. Maybe something in controller.js.
Note that each world has an unused settings.json file inside it that could potentially be leveraged.
This would also be a good time to go beyond plain links and include an image for each world.
Currently, when using the Janus-gateway server and naf-janus-adapter, rooms are not being emptied on disconnection and thus fill up after a while. This results in no new connections being accepted and the Janus server needing to be restarted.
With the recent API simplification, the Circles README needs some updating to highlight the changes.
Additionally, perhaps it is now time to consider having docs folders with .md files describing each component's purpose, i.e., like the A-Frame library does ...
Larger tablets can be tiring to hold up, but using the device to look around can be fun and practical. A possible solution might be to introduce a basic button that lets you switch between magic-window mode and non-magic-window mode (scrolling left/right only on iPad, i.e., the same as the mode when we do not allow access to sensors).
We may even want to consider creating a new look-controls that allows users to scroll left/right and up/down.
The indicator will consist of a cube (or shape) that is coloured depending on the state of the experiment:
White: idle
Red: experiment running
Green: object has hit the ground
There will also be a timer on the indicator which will show how much time it took for the object to hit the ground. The timer will elapse as the experiment runs, stopping once the object hits the ground.
There will be two indicators in the scene: one for each object.
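The state-to-colour mapping and the timer behaviour described above can be sketched as a small state machine, independent of any rendering (colour strings and method names are assumptions):

```javascript
// Indicator states map to colours; the timer advances only while the
// experiment is running and freezes the moment the object lands.
const STATE_COLOURS = { idle: 'white', running: 'red', landed: 'green' };

class Indicator {
  constructor() { this.state = 'idle'; this.startTime = 0; this.elapsed = 0; }
  start(now)    { this.state = 'running'; this.startTime = now; }
  tick(now)     { if (this.state === 'running') this.elapsed = now - this.startTime; }
  land(now)     { this.tick(now); this.state = 'landed'; }
  get colour()  { return STATE_COLOURS[this.state]; }
}
```

One instance per object gives the two independent indicators; the scene component would just copy `indicator.colour` onto the cube's material and `indicator.elapsed` onto the timer text each frame.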
We need a way to foster creativity and allow users to create messages and/or art to be placed in worlds, and to have those persist, i.e., leaving messages for others to find. Related to #88