
RoomAlive Toolkit README

The RoomAlive Toolkit enables creation of dynamic projection mapping experiences. This toolkit has been in internal use at Microsoft Research for several years and has been used in a variety of interactive projection mapping projects such as RoomAlive, IllumiRoom, ManoAMano, Beamatron and Room2Room.

The toolkit consists of two separate projects:

  • ProCamCalibration - This C# project can be used to calibrate multiple projectors and Kinect cameras in a room to enable immersive, dynamic projection mapping experiences. The codebase also includes a simple projection mapping sample using Direct3D.
  • RoomAlive Toolkit for Unity - A set of Unity scripts and tools that enable immersive, dynamic projection mapping experiences, based on the projector-camera calibration from ProCamCalibration. This project also includes a tool to stream and render Kinect depth data to Unity.

Here is an example scene from our RoomAlive project to illustrate what is possible (this one uses 6 projectors and 6 Kinect cameras):

RoomAlive Scene

Development Status

This project is under development. The current release is in beta and all APIs are subject to change. The next major features might include compute shaders for handling all depth meshes as a single unified vertex buffer, radiometric compensation across dynamic scenes, and support for the Unity and Unreal engines. We welcome contributions!

Citations

The RoomAlive Project was started in the summer of 2013 with a group of superstar interns. If you are looking for a reference to that original work, please cite:

@inproceedings{Jones:2014:RME:2642918.2647383,
 author = {Jones, Brett and Sodhi, Rajinder and Murdock, Michael and Mehra, Ravish and Benko, Hrvoje and Wilson, Andrew and Ofek, Eyal and MacIntyre, Blair and Raghuvanshi, Nikunj and Shapira, Lior},
 title = {RoomAlive: Magical Experiences Enabled by Scalable, Adaptive Projector-camera Units},
 booktitle = {Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology},
 series = {UIST '14},
 year = {2014},
 isbn = {978-1-4503-3069-5},
 location = {Honolulu, Hawaii, USA},
 pages = {637--644},
 numpages = {8},
 url = {http://doi.acm.org/10.1145/2642918.2647383},
 doi = {10.1145/2642918.2647383},
 acmid = {2647383},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {projection mapping, projector-camera system, spatial augmented reality},
} 

Contribute

We welcome contributions to help advance the projection mapping research frontier!

  • File an issue first so we are aware of the change you want to make and can possibly guide you. Please include log files when you report an issue.
  • Use the usual steps to make changes, just like in other GitHub projects.
  • Make sure your changes compile cleanly on Windows, and test basic operations.
  • When your pull request is created, you might be prompted to sign the Contributor License Agreement (CLA) one time, unless the changes are minor. It's very simple and takes less than a minute.
  • If your pull request gets a conflict, please resolve it.
  • Watch for any comments on your pull request.
  • Please try to limit your changes to a small number of files. We need to review every line of your change, and we can't reasonably do that for pull requests with a huge number of changes.
  • Do not make just cosmetic changes. We will generally reject pull requests with only cosmetic changes.

License

This project is licensed under the MIT license.

roomalivetoolkit's Issues

How to apply a 3x4 calibration matrix to Kinect data?

I get the calibration result and an obj file. Is the obj file produced by the calibration process?
The result is divided into extrinsic and intrinsic parts, so I computed a 3x4 matrix by multiplying with the [1 0 0 0, 0 1 0 0, 0 0 1 0] matrix. But the values in the fourth column are over 100, and if they are over 100 the camera position is very far from the Kinect position, while the bounding box of the raw Kinect data is no more than about 2 to 10.
Is that the right way to obtain the 3x4 calibration matrix?
How do I apply the calibration matrix to the Kinect data?
And how do I change Kinect coordinates to projector coordinates? I multiplied the Kinect data by the inverse of the camera pose matrix, but the result is not good.
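For readers puzzling over the same step: multiplying the 4x4 pose by that [1 0 0 0, 0 1 0 0, 0 0 1 0] matrix simply keeps its top three rows, and the usual 3x4 projection matrix is the intrinsics times those rows. Below is a minimal editor's sketch, not toolkit code; the row-major array layout is an assumption, and depending on which direction the stored pose points you may need its inverse.

static class ProjectionSketch
{
    // point = [x, y, z, 1] in depth-camera coordinates.
    // K = 3x3 intrinsics, pose = 4x4 [R t; 0 1], both assumed row-major.
    public static double[] ProjectToPixels(double[,] K, double[,] pose, double[] point)
    {
        // P = K * (top three rows of pose): the 3x4 projection matrix
        var P = new double[3, 4];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 3; k++)
                    P[i, j] += K[i, k] * pose[k, j];

        // p = P * point
        var p = new double[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 4; j++)
                p[i] += P[i, j] * point[j];

        // perspective divide yields pixel coordinates (u, v)
        return new[] { p[0] / p[2], p[1] / p[2] };
    }
}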

Solve failed

decoding Gray code images for projector 0, camera 0
elapsed time 1588
projecting depth points to color camera 0
rejected 14.13666% pixels for high variance
elapsed time 1770
projector 0 is seen by camera 0 (20180 points)
elapsed time 1858
calibrating projector 0
RANSAC iteration 0
error = 429.497279415675
RANSAC iteration 1
error = 151.60088993423
RANSAC iteration 2
error = 116.755996115679
RANSAC iteration 3
error = 114.391513594397
RANSAC iteration 4
error = 93.634629536181
RANSAC iteration 5
error = 79.4883445206181
RANSAC iteration 6
error = 97.0091158968851
RANSAC iteration 7
error = 77.5354579378281
RANSAC iteration 8
error = 71.8200025854655
RANSAC iteration 9
error = 60.2050934296484
Solve failed
RoomAliveToolkit.ProjectorCameraEnsemble+CalibrationFailedException: Unable to successfully calibrate projector: 0
at RoomAliveToolkit.ProjectorCameraEnsemble.CalibrateProjectorGroups(String directory) in E:\GitHub\RoomAliveToolkit\ProCamCalibration\ProCamEnsembleCalibration\ProjectorCameraEnsemble.cs:line 668
at RoomAliveToolkit.MainForm.Solve() in E:\GitHub\RoomAliveToolkit\ProCamCalibration\CalibrateEnsemble\MainForm.cs:line 1115
Solve complete

Any ideas?

Thanks!

Surface labelling

Hello @thundercarrot ,

First, let me thank you for publishing your code!
I read in your RoomAlive paper about "Automatic Scene Analysis" (Hough transform, labelling) and touch detection, but I couldn't find a clue about it in the current master and develop branches. Am I right to think it will be released later as part of the Unity plugin, or did I miss it?

Regards

Can't find Shader.exe?

I'm trying to run the program, but it keeps telling me that it can't find shader.exe. Am I missing something here?

DesktopDuplicationEnabled

Is there a specific window resolution or other settings recommended for using DesktopDuplicationEnabled to project from other applications? I'm targeting with windowPtr, adjusting the sourceRegion nudge values and fov, but am having difficulty aligning the contents of the output window to the bounds of the projection.

Missing file error when building the CalibrateEnsemble project

Hello,

I am trying to build the CalibrateEnsemble and ProjectionMappingSample projects and am getting the following errors:

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\DepthAndColorGS.cso" because it was not found. CalibrateEnsemble

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\DepthAndColorPS.cso" because it was not found. CalibrateEnsemble

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\DepthAndColorVS.cso" because it was not found. CalibrateEnsemble

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\BilateralFilterPS.cso" because it was not found. ProjectionMappingSample

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\DepthAndColorFloatVS.cso" because it was not found. ProjectionMappingSample

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\DepthAndColorGS.cso" because it was not found. ProjectionMappingSample

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\DepthAndColorPS.cso" because it was not found. ProjectionMappingSample

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\DepthAndColorVS.cso" because it was not found. ProjectionMappingSample

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\DepthAndProjectiveTextureVS.cso" because it was not found. ProjectionMappingSample

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\FromUIntPS.cso" because it was not found. ProjectionMappingSample

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\FullScreenQuadVS.cso" because it was not found. ProjectionMappingSample

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\MeshPS.cso" because it was not found. ProjectionMappingSample

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\MeshVS.cso" because it was not found. ProjectionMappingSample

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\MeshWithTexturePS.cso" because it was not found. ProjectionMappingSample

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\PassThroughPS.cso" because it was not found. ProjectionMappingSample

Could not copy the file "C:\Users\Hal\Downloads\RoomAliveToolkit-master\ProCamCalibration\x64\Debug\RadialWobblePS.cso" because it was not found. ProjectionMappingSample

Can I just get a copy of these missing files and add them in? Or am I missing something more fundamental?

Compatibility with other depth cameras

Hey,

I know the toolkit only reads and processes data from the Kinect v2, but theoretically speaking, if I were to modify the toolkit to take in data streams from other depth cameras, such as Intel's F200, would the calibration math still work?

I can't see any reason why it wouldn't; the toolkit doesn't seem to rely on the Kinect SDK for anything other than reading the camera streams. I figured I'd ask here before I tried.

Thanks

UI thread update

Hello,

Can you give me some guidance on how to update WinForms controls on the projected screen? For example, I have a button in MainForm (PC screen), and after clicking that button I want to change the text on a label in Form1 (projection screen). It works if the button is in Form1, but not when it's in MainForm. I've tried calling Invalidate, Refresh, Update, BeginInvoke, Application.DoEvents, and background threads, but none of these fixes the problem. This is bizarre.

Thank you.
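Editor's note for anyone hitting the same wall: when the update comes from a non-UI thread, the standard WinForms fix is to marshal it onto the thread that owns the control; if both forms share the UI thread, double-check that MainForm holds a reference to the Form1 instance that is actually being shown. A minimal sketch, with hypothetical field and control names:

using System;
using System.Windows.Forms;

public class MainAppForm : Form
{
    Form projectionForm; // the Form1 instance shown on the projector (hypothetical)

    void button_Click(object sender, EventArgs e)
    {
        // look up the target label on the projection form (hypothetical control name)
        Control label = projectionForm.Controls["label1"];
        if (label.InvokeRequired)
            label.Invoke(new Action(() => label.Text = "updated"));
        else
            label.Text = "updated";
    }
}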

LOOKING FOR A GOOD INTERN

Are you in a PhD program and want to research interactive projection mapping applications with the RoomAlive Toolkit? We have a conference room that we recently outfitted with 5 projectors and 8 cameras, and we have a few ideas of what to do with this amazing space. Drop me a note if you would be interested in spending the summer at Microsoft Research playing with us!

Andy

Calibration displays on monitor not projector

When I try to run a calibration, the stripes display on the monitor, not the projector. I have been able to properly run a calibration from a different computer running Windows 8, but not with this computer running Windows 10. When I hit Setup, then Show Projector Server Connected Displays, the projector is labeled localhost0 and the monitor is labeled localhost1. I have changed the displayIndex to 1, but that does not fix the issue.

Main display and projector resolutions

Hi Andy,

Thank you for your great work!

It looks like the main display resolution must be the same as the projector resolution in order to get a good result.
I have a system with 5 monitors:

  • two 3840x2160 (both 150% dpi scale)
  • two 1920x1080 (both 100% dpi scale)
  • one 1024x768 (100% dpi scale)

And 2 projectors:

  • 1024x768
  • 1920x1080

Calibration seems to work well in almost any combination, but the re-projected images match the captured area for me only if I set:

  • 1024x768 monitor as Main Display for 1024x768 projector as extended desktop
  • a 1920x1080 monitor as Main Display for 1920x1080 projector as extended desktop

Maybe this is not really an issue; I am not yet very experienced with RoomAlive. I just had some headaches figuring that out.

Thanks and regards,
Iulian

Point cloud registration

Hi,

I want to register the point clouds output by RoomAliveToolkit.
Since I haven't received my second Kinect v2 yet, I took the example provided in the README for testing.

My process is simply: read each mean.ply file under the camera folder,
multiply the point positions by the depthToColor matrix (found in ensemble.xml),
then multiply all the point positions by the camera pose (found in ensemble.xml).
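(As an editor's sketch, that transform chain as code; it simply mirrors the order described above, with row-major 4x4 matrices assumed, and whether the stored pose needs inverting is exactly the kind of thing worth checking.)

static class RegistrationSketch
{
    // p = [x, y, z, 1] from mean.ply, in depth-camera coordinates.
    // Both matrices are 4x4, as read from ensemble.xml.
    public static double[] Register(double[,] depthToColor, double[,] pose, double[] p)
    {
        // depth camera -> color camera, then into the shared world frame
        return Apply(pose, Apply(depthToColor, p));
    }

    static double[] Apply(double[,] M, double[] v)
    {
        var r = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                r[i] += M[i, j] * v[j];
        return r;
    }
}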

The registered result I got is shown below
point_cloud_1
points in red : left camera
points in yellow: center camera
points in blue: right camera

The top view is
point_cloud_2

From the top view, I can see gaps and shifts between the different sets of point clouds.
Did I do anything wrong in the registration process?

Problem. Where is the projector intrinsic and extrinsic data?

Hello.

I am trying to projection-map 3D points from the depth camera to the projector.
However, I have run into a problem.

I found the same issue here and saw the answer.

#13
[[[ I assume you mean a 3D point in the depth camera coordinate frame? This is a simple coordinate transform, i.e. the 3D point x' in projector coords is A * x, where A is the projector's 4x4 pose matrix and x is the (homogeneous) point in the depth camera. x' can then be projected to projector image coordinates using the intrinsics (look for the 'Project' function in the code). ]]]
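As an editor's illustration of that quoted recipe (a sketch only; names are hypothetical, fx, fy, cx, cy come from the projector's 3x3 camera matrix, and lens distortion is ignored):

static class ProjectorMathSketch
{
    // x = [x, y, z, 1] in depth-camera coordinates;
    // A = the projector's 4x4 pose matrix, assumed row-major.
    public static (double u, double v) ToProjectorPixel(
        double[,] A, double fx, double fy, double cx, double cy, double[] x)
    {
        // x' = A * x : transform into projector coordinates
        var xp = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                xp[i] += A[i, j] * x[j];

        // pinhole projection: u = fx * X / Z + cx, v = fy * Y / Z + cy
        return (fx * xp[0] / xp[2] + cx, fy * xp[1] / xp[2] + cy);
    }
}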

On my computer, after calibration, it only shows the camera matrix and camera pose, as in the figure below. I thought these were the calibration results of the projector. However, using these data, the 3D points are not projected onto the object.

image

I use one main display, with the projector attached to the PC.

Many thanks

Still on OBJ import

Seeing that some users here have had success importing obj files from the solved calibration into their Unity projects, I'd like to ask if anyone has had issues importing the obj files into Unity, and how they solved them. Unity just hangs when I try; I wouldn't expect a file size of 15 MB to give trouble. I'm using the unofficial RoomAlive Toolkit Unity integration from https://github.com/Superdroidz/UnityRoomAlive which has worked pretty well up to this point. Any tips would be well appreciated.

Going beyond the toolkit

Hello Andy,

Based on your detailed instructions and code, I have managed to get your projection mapping samples working well. Next, I would like to create a 'virtual window' that updates in real time on my wall.

Currently, I am stuck on the very first step: replacing "FloorPlan.obj" with my own obj file. Both the "view" and "projection" windows are black.

I am using one Kinect and one projector. Live head-tracking is enabled and works fine with "FloorPlan.obj".

Thank you.

Null Reference Exceptions found

Place 1:
https://github.com/Kinect/RoomAliveToolkit/blob/master/ProCamCalibration/ProCamEnsembleCalibration/Calibrate.cs#L857

Call Stack:
at RoomAliveToolkit.ConsoleRedirection.Write(Object value)
at System.IO.TextWriter.SyncTextWriter.Write(Object value)
at RoomAliveToolkit.ProjectorCameraEnsemble.CalibrateProjectorGroups(String directory)
at RoomAliveToolkit.MainForm.Solve()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()

Inside projector.calibrationPointSets[camera].pose, mbymWorkMatrix1, squareWorkMatrix1, workColumn1, and workIndx1 are null.

Assuming the above error is avoided with the following code:

if (projector.calibrationPointSets[camera].pose != null)
{
    Console.Write(projector.calibrationPointSets[camera].pose);
}

Another null pointer exception occurs at place 2:

https://github.com/Kinect/RoomAliveToolkit/blob/master/ProCamCalibration/ProCamEnsembleCalibration/Matrix.cs#L430

A is null.

*Edit: Looks like projector.calibrationPointSets[fixedCamera].pose is null at https://github.com/Kinect/RoomAliveToolkit/blob/master/ProCamCalibration/ProCamEnsembleCalibration/Calibrate.cs#L902

Call stack:
at RoomAliveToolkit.Matrix.ToMathNet(Matrix A) in h:\RoomAliveToolkit\ProCamCalibration\ProCamEnsembleCalibration\Matrix.cs:line 430
at RoomAliveToolkit.Matrix.Inverse(Matrix A) in h:\RoomAliveToolkit\ProCamCalibration\ProCamEnsembleCalibration\Matrix.cs:line 455
at RoomAliveToolkit.ProjectorCameraEnsemble.UnifyPose() in h:\RoomAliveToolkit\ProCamCalibration\ProCamEnsembleCalibration\Calibrate.cs:line 907
at RoomAliveToolkit.ProjectorCameraEnsemble.OptimizePose() in h:\RoomAliveToolkit\ProCamCalibration\ProCamEnsembleCalibration\Calibrate.cs:line 936
at RoomAliveToolkit.MainForm.Solve() in h:\RoomAliveToolkit\ProCamCalibration\CalibrateEnsemble\MainForm.cs:line 997
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()

Any help would be much appreciated.

Thanks.

Exception on Acquire

Loaded E:\project\calibrate.xml
Acquire failed
System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]: Index was outside the bounds of the array. (Fault Detail is equal to An ExceptionDetail, likely created by IncludeExceptionDetailInFaults=true, whose value is:
System.IndexOutOfRangeException: Index was outside the bounds of the array.
at RoomAliveToolkit.ProjectorServer.Size(Int32 screenIndex) in e:\GitHub\RoomAliveToolkit\ProCamCalibration\ProjectorServer\ProjectorServer.cs:line 43
at SyncInvokeSize(Object , Object[] , Object[] )
at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)).
Acquire complete

Can't find displays on Windows 10

Hi there, I'm not sure if there is a forum somewhere dedicated to the RoomAliveToolkit.
My problem is that nothing is displayed during the calibration process. When I select Setup -> Show Projector Server Connected Displays, nothing is displayed. In MainForm.ShowDisplayIndexes(), two displays are cycled through, and the projector.Client.OpenDisplay() and DisplayName() methods are called without any error for both displays.

My environment is Windows 10 (build 10074) on a Surface Pro 3; I used VS2013 to build the project.

Any idea what's going on there?

Many thanks in advance,
Robert

Auto-save xml after Calibrate-Solve

Hello RoomAlive-team,
Your toolkit is great and has good potential. What I'd ask is not really an issue, but a request. Please add 1-2 lines of code so that when 'Calibrate / Solve' is invoked, the config XML is saved automatically at the end of the calibration. I usually forget and leave CalibrateEnsemble without saving the calibration config. A subsequent invocation of Calibrate / Solve then causes a crash, and I have to start all over again. Thank you in advance!

.OBJ file import problem

Hello, when saving the scene in the calibration tool to an .OBJ file and importing it into Unity, I get the following error:

"ImportFBX Errors:
Couldn't import file C:/Users/Admin/Desktop//Unity Projects/Test/Assets/3dobejct/3newpos2.obj.

UnityEditor.DockArea:OnGUI()"

I have done this before in an older version of the toolkit without any problems, but now it won't load the object properly.

Any suggestions on why that might be?

The model is built from 3 projectors and 3 Kinects.

Thanks 👍

One projector isn't displayed (black window)

Hi, I'm testing RoomAlive with 2 Kinects and 2 projectors.
The calibration process was successful.
But in the "projectionMappingSample" code, there is a problem.
UserView is OK; it contains the calibrated data from both Kinects. And projector0 is displayed correctly.
But projector1 displays a black window, like below:
projector1_error
What do you think about this problem?
Thanks!

Calibration data within Opengl

Can I plug the pose of the projector into the OpenGL modelview matrix as-is, without any modifications? If not, what modifications would be required?

ProjectionMapping

Hi there,

my group and I are currently trying to use your framework to create a simple game in Unity. We were able to calibrate the projector with a single Kinect camera, and to run the projection mapping sample just fine.

Now we are trying to fill two walls inside a bookshelf with different colors:
image
We tried to do that by saving the bookshelf geometry as an obj file and loading it in Unity. Then we put a plane with a different material on each wall. Lastly, we put the main camera in an appropriate spot and used the desktopDuplication option in the sample to project the "game" onto the bookshelf.
Unfortunately, the colored planes never match the walls in the real world. We tried to adjust the camera, but we can never achieve the overlap we desire (as you can see in the picture: they have the right width, but not the right height). We thought that by only adjusting the camera into the projector position, we would be able to fill the walls of the bookshelf "perfectly". Did we perhaps miss something? The viewport of our Kinect is much higher than that of the projector. Could that be a problem?

[FIXED] Calibration of 3 ProCams fails to "solve" or to connect correctly

EDIT START:
TL;DR: If you have problems with solving, try placing large rigid, light-colored objects (white / yellow / green) in the overlapping area.

After talking with our supervisor, it was revealed to us that the boxes we used were the problem, as they were completely black. This made them absorb the infrared light used by the Kinects to measure depth. Covering the boxes in white sheets fixed it, and the calibration ran through without any problems.
EDIT END:

Hey Andy.

We've made a big setup (3 projectors and 3 Kinects) and are trying to calibrate a single square room. However, our efforts are running into multiple problems. After an entire day of using Acquire/Solve with different tweaks to our setup (rigid objects in overlapping sections, projector/Kinect angles, and so on), we are still not able to calibrate our room. The layout is simple enough that it should be possible, but without further knowledge, debugging is getting to the point of randomness.

Here is a link to all our calibrations: http://cs.au.dk/~peterbm/ensembles/ which also includes a picture of our setup taken with my phone, so you can get the grand picture.

Some of them managed to run through the solve phase but ended with terrible RMS errors (>40), while others simply failed and threw an error at ProjectorCameraEnsemble.cs, line 877, where numCompletedFits must not be 0.

Through trial and error, it seems like the center projector fails to calibrate during solve.
Also, are there any guidelines / requirements on the order in which projectors and cameras are written in the ensemble.xml file? Can the cameras be listed in the order "left, center, right" while the projectors are listed "center, left, right", or does their ordering have to be the same?

Any help with calibration of our room is much appreciated.

PS. Inside 3_boxes_ensemble.rar are 2 screenshots of the finished calibration process, showing the left and right pieces barely attached to the center piece.

Unity3D plugin

Hi,

Do you have plans to release a plugin for Unity3D?

360 vs Xbox1

Hi everyone:

I have tried to use a Kinect 360 to no avail; however, I could get the toolkit to work with an Xbox One sensor.

Is there anything extra I should do to get the Kinect 360 to work?

Thanks

Problems with multiple projectors

Hi guys,

we are using your framework with 2 projectors and 1 Kinect.
The calibration works fine, but when we try to start the sample or any other projection, the second projector turns black.
Both projectors are plugged into the same PC and placed vertically with a slight overlap of the screens.
We've assigned one projector the index 1 and the second the index 2 (0 is the desktop).

Any clues as to what could be wrong?

Problem. Stripes are projected on the desktop monitor, not the projector

Hello.

Firstly, thanks for sharing this nice toolkit.
During calibration, I have run into a problem.

Visual Studio 2013 and Kinect SDK 2.0 are used.

I use one main display, with the projector attached to the PC.
I checked that the main display is 0 and the projector display is 1.
However, whenever I click 'Acquire', the stripes are always shown on the main display,
and then the calibration fails again and again.

Many thanks.

.dll Issues and Samples

A colleague and I are working on getting RoomAlive set up; we're starting with the default single projector/Kinect setup as in the ReadMe. The process goes along, and at random points we get a Microsoft .NET Framework error that points to line 854 in the Ensemble's main config, or files are missing entirely. We have extracted the files, re-extracted them, and rebuilt numerous times, and this error still comes up at random times. At one point, when we finally got far enough, we opened up the sample projects and managed to get the 3D object to display. We were unable to get the Wobble effect to work and, again, received the error in Ensemble before we were able to test the rest of the samples.

Is anyone else experiencing these problems, or does anyone have a suggestion we might try to either avoid or entirely fix this issue in the future? Thanks.

Undesired skew when objects get moved outside the main ProCam unit

The Problem:
In our current setup, we have 3 ProCam units that all face flat white walls. We have calibrated the room successfully using big white boxes for non-planar surfaces, but once we start to move objects around the walls, it doesn't behave as expected.

From our understanding, a setup like ours would translate to a very wide desktop in the x direction, with a normal height in the y direction. Instead, it seems like the perspective is all wrong, as the objects move up at a 45 degree angle once they leave the center projector. We wonder if this is due to the location of our ProCam units during the calibration, or to the Kinect used for user tracking.

Here is a video of our setup with objects being moved around, showcasing the problem:
https://youtu.be/qOLY7pav-8E
The problem is shown about halfway through the video, and the Kinect used for hand tracking is visible and lit up when I start to move the object around.

And here is a link to our calibration, so you can take a look at the generated data:
http://cs.au.dk/~peterbm/skew/3_new_pos2/

The Desired Feature:
For my final setup, I would prefer a "wide desktop" where I can move objects around solely by translation in the x and y coordinates, without having to rotate anything. If this is not possible, then any hints on how to achieve this would be much appreciated, even if the solution turns out to be hard-coding some rotation based on coordinates.

How to obtain 3D data of the room model after calibration

Hi,

I am thinking about getting the 3D data of the room model, instead of using an exported .obj file, to render it in Unity, since I want to make it more interactive and reflect everything in Unity when running RoomAlive.

Which function call should I use?

Thanks.

Calibration data understanding.

Hello there,
thanks for such a helpful toolkit.
I completed the calibration and now have the calibration file. I am trying to understand the calibration data, and I just want to confirm that I understand it correctly.
So first there is the camera pose matrix, which is 4x4; this relates the camera and the universal coordinate system. Then the color camera matrix (3x3, intrinsic), and the same for the depth camera (3x3, intrinsic). The same goes for the projector data (I am using only one pair of camera and projector). At the end there is the projector pose matrix (4x4), which is the coordinate system of the projector.
I want to map an image from the camera view into the projector view. So is "transform the pose from camera to projector, and then use the projector intrinsics to map the projector's 3D world into the 2D image" the right way to implement it?
I apologize if I am asking an inappropriate question.
Thanks and Best Regards

How can we see the 3D object in space rather than near the projected surface?

Hi @thundercarrot ,

I am able to see the 3D object and interact with it, but it is near the projected surface. What changes should I make so that it appears in space (let's say at chest height) rather than near the projected surface? Is it possible with one projector and one Kinect (just for a single user's perspective view)?

Also, can you please add some description (in the Samples) of how you rendered the 3D object? If I want to replace this cube with a ball or any other object, how do I go about it?

Thank you so much for your valuable time.

Regards
Devender

ProjectionMappingSample error

When I open (and run as admin) the ProjectionMappingSample, I get an error.

Unhandled Exception: System.IndexOutOfRangeException: Index was outside the bounds of the array.
   at RoomAliveToolkit.ProjectionMappingSample..ctor(String[] args) in C:\Users\Ayan\Desktop\ProCamCalibration\ProjectionMappingSample\ProjectionMappingSample.cs:line 25
   at RoomAliveToolkit.ProjectionMappingSample.Main(String[] args) in C:\Users\Ayan\Desktop\ProCamCalibration\ProjectionMappingSample\ProjectionMappingSample.cs:line 19

Then it says ProjectionMappingSample has stopped working. It gave me the option to debug, so I tried debugging and it said this was the problem:

string path = args[0];

I tried to rebuild. What should I do next?
Thanks, Ayan.
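Editor's note: judging from the stack trace and the snippet, the sample reads the calibration file path from args[0], so the IndexOutOfRangeException indicates it was launched without command-line arguments. Passing the path of the saved ensemble XML (a hypothetical example below) should get past this point:

ProjectionMappingSample.exe C:\path\to\ensemble.xml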

Mapping Kinect's 3D point to Projector's 2D pixel in Unity

Hey,
I am trying to map a 3D camera-space point from the Kinect to a projector's 2D pixel in Unity. I know this has been asked here several times, and I've read through all those threads, but I could not make it work. My calibration of 1 projector and 1 Kinect is coming out pretty good, and switching to projector view in CalibrateEnsemble lines up the corners pretty well. The overlaid mapping also looks good enough in the sample.

I think there are 2 ways to go about this, and I tried both. Assume the 3D point from the Kinect is [kx, ky, kz], the tracked joint position of my right hand:

  1. Using the Project() function:
    a. First convert from the homogeneous camera coordinate frame (i.e. a Vector4 with values [kx, ky, kz, 1]) to the projector coordinate frame.
    As described in this discussion: #13,

3D point x in projector coords is A * x, where A is the projector's 4x4 pose matrix and x is the (homogeneous) point in the depth camera.

So I used a manually filled Matrix4x4 (from the projector pose in the xml) and multiplied it with [kx, ky, kz, 1] to get [px, py, pz, 1].

b. Then convert the projector's 3D point to a 2D point:

x can then be projected to projector image coordinates using the projector's 3x3 intrinsics matrix.
Look for the 'Project' function in the code.

I then called Project with parameters of projectorCameraMatrix (manually filled from the XML), a zero matrix (as the projector lensDistortion is [0, 0]), px, py, pz, u and v.
The u and v values I get in pixels do not map correctly to my hand after converting from (-width/2, width/2) and (-height/2, height/2) to (0, width) and (0, height). There is a bit of distance between the two, which changes as I move around.

  2. Using GraphicsTransforms.ProjectionMatrixFromCameraMatrix:
    As discussed in: #18
    I calculated the 'view' and 'projection' matrices as they are calculated in the CalibrateEnsemble > MainForm.cs > SetViewProjectionFromProjector function, then calculated projectorWorldViewProjection:
    projectorWorldViewProjection = world * form.view * form.projection;
    The 'world' is an identity matrix for 1 Kinect, so I removed it in the code. The projectorWorldViewProjection I get is exactly the same as the one calculated by ProjectionMappingSample.
    Then, multiplying projectorWorldViewProjection with [kx, ky, kz, 1], I get [qx, qy, qz, qw]. Mapping this from (-1, 1) to (0, 1) for x and y, and reversing y's axis, I still get quite a bit of displacement between my hand and the projected point.

There are a few things which I have doubts about and I might be doing something wrong there:

  1. Do I need to convert the right-handed matrices used by RoomAlive to the left-handed ones used by Unity? I'd need the conversion if I were placing 3D objects in the scene, but converting a Kinect camera-space 3D point to the projector's 2D point only requires matrix multiplications, and those should be platform independent, right?
  2. In ProjectorCameraEnsemble, the Kinect point values [kx, ky, kz, 1] are modified before being used for matrix multiplication:
    // depth camera coords
    var depth = depthImage[x, y] / 1000f; // m
    // convert to depth camera space
    var point = depthFrameToCameraSpaceTable[Kinect2Calibration.depthImageWidth * y + x];
    depthCamera[0] = point.X * depth;
    depthCamera[1] = point.Y * depth;
    depthCamera[2] = depth;
    depthCamera[3] = 1;

Do I need to also divide kz by 1000, and multiply kx and ky by kz/1000, before the matrix multiplication, when using Project() or GraphicsTransforms.ProjectionMatrixFromCameraMatrix? I tried it, and that seemed to make the mapping even worse.

I've been trying everything I can, but have come to a dead end. I know it's a long rant, but I would be grateful for any help...
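Editor's sketch for the first route, in Unity C# (names hypothetical; matrix and intrinsics values come from ensemble.xml, and the handedness and pose-direction questions raised above are left open). One data point worth noting: Kinect v2 tracked-joint positions (CameraSpacePoint) are already in meters, whereas the quoted toolkit snippet starts from raw depth pixels in millimeters, which is why it divides by 1000; a joint position should not need that rescaling.

using UnityEngine;

public static class ProjectorMappingSketch
{
    // pose: the projector's 4x4 pose; fx, fy, cx, cy: from its 3x3 intrinsics.
    // joint: tracked joint in Kinect camera space, already in meters.
    // Lens distortion is ignored (reported above as [0, 0]).
    public static Vector2 ToProjectorPixel(Matrix4x4 pose, float fx, float fy,
                                           float cx, float cy, Vector3 joint)
    {
        // depth-camera space -> projector space (affine transform, w = 1)
        Vector3 p = pose.MultiplyPoint3x4(joint);

        // pinhole projection straight to pixel coordinates
        return new Vector2(fx * p.x / p.z + cx, fy * p.y / p.z + cy);
    }
}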

Rendering ProjectionMappingSample on Multiple Machines

We've gotten the system to run on two machines (it will be three once a PCI USB3 hub comes in), and it currently runs the Projection Mapping Sample correctly on the host. Going back through the ReadMe, it states that the toolkit does not support rendering across multiple machines unless done in a distributed framework. Any ideas on how one would go about doing this, so that we can get all three machines to render their own images, or all three at once?

I cannot see the 3D Object

Hello,

I have been following the steps/instructions to compile and calibrate the demo, and at first everything seems to be working fine, but in "projectionMappingSample" I am having trouble visualizing the 3D object.

The rest seems to work well, but when I activate the "ThreeDObjectEnabled" option, I cannot see the 3D object. I only get a black or white screen when I position myself in front of the Kinect that we are using for tracking.

We get a correct calibration, because the wobble effect works correctly.

Once calibrated, I am using the Kinect, which is connected directly to a PC, to track the head. The flag "liveDepthEnabled" is deactivated.

Do you know what the problem could be, and is there any fix?

Thanks a lot for your time!

-Pepe-

Planar surfaces

Hi,
First, thanks for kindly sharing this good project.

I noticed this sentence in the README:
"The current release of the calibration tool does not address the case where the projection surface is flat (such as a bare wall). If the projection surface is flat, place some sizable objects, such as a couch, a few boxes, whatever you have handy, to create a non-flat projection surface."

I also noticed there is code in the ProCamEnsemble class to check whether points are in a plane.

I am curious to know the reason for that constraint.
If I want to make an improvement to enable plane solving, what should I do? Any suggestions?

Thanks!

Request. Step by Step Video Guide

I know a number of people who would like to promote this as students, but who find it hard to work with these programs with very little exposure to programming and the other knowledge needed to set this up. It would be a great help if someone could post a video tutorial of the basics of extracting, building, and using this program.
