
Comments (10)

MarekKowalski commented on May 25, 2024

Hi, thanks for reaching out. Since I am working from home, I unfortunately do not have access to a Kinect, so it is hard for me to make fixes to this branch. That said, the app worked fine the last time I checked. Let's try to debug this. Here are some questions:

  1. I'm not sure what you mean by Azure Kinect SDK v2.0; the project is currently set to use Azure Kinect SDK v1.4.1, which should download via NuGet. This is also the latest version available in the Azure Kinect repo. Could you elaborate on what you meant there?
  2. This issue sounds like the client is not reading any data from the Kinect. When you open the client app, do you see the Kinect's camera image in the app window?


VisionaryMind commented on May 25, 2024

Apologies for the confusion. I am working with multiple SDKs here and mistakenly wrote 2.0, but yes -- I am using v1.4.1 direct from the repo. Also, the client is not reading data from the Kinect at all. I see a "capture device failed to initialize" message.


MarekKowalski commented on May 25, 2024

I see. It looks like bInitialized is being set to false in bool AzureKinectCapture::Initialize(). It could be set in one of the following lines: 36, 54, or 114.

Can you try setting breakpoints on those lines and seeing which one it is? Here are some possible causes, depending on the line (see the sketch after this list):

  • If it's line 36, then it looks like the SDK can't open the device. Did you try other Azure Kinect apps? Did they work?
  • If it's line 54, then it looks like there is an issue with the Kinect's internal calibration. We'd have to see what the solutions are in this case.
  • If it's line 114, then everything else worked but the frames are still not arriving. This might indicate some sort of issue with how LiveScan3D initializes the Kinect.
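For orientation, here is a minimal sketch of the sequence a routine like AzureKinectCapture::Initialize() goes through, with the three failure points called out. This is an illustrative reconstruction using the plain k4a C API under assumed names, not the actual LiveScan3D source:

```cpp
// Illustrative sketch only -- not the actual LiveScan3D code.
// Shows the three places where initialization can fail, corresponding
// roughly to lines 36, 54, and 114 discussed above.
#include <k4a/k4a.h>

bool Initialize()
{
    k4a_device_t device = NULL;

    // ~line 36: the SDK cannot open the device at all.
    if (K4A_FAILED(k4a_device_open(K4A_DEVICE_DEFAULT, &device)))
        return false;

    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.color_resolution = K4A_COLOR_RESOLUTION_720P;
    config.color_format = K4A_IMAGE_FORMAT_COLOR_BGRA32;

    // ~line 54: reading the device's internal calibration fails.
    k4a_calibration_t calibration;
    if (K4A_FAILED(k4a_device_get_calibration(device, config.depth_mode,
                                              config.color_resolution, &calibration)))
        return false;

    if (K4A_FAILED(k4a_device_start_cameras(device, &config)))
        return false;

    // ~line 114: the device opened and started, but no frame arrives
    // within the timeout.
    k4a_capture_t capture = NULL;
    if (k4a_device_get_capture(device, &capture, 1000) != K4A_WAIT_RESULT_SUCCEEDED)
        return false;

    k4a_capture_release(capture);
    return true;
}
```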


VisionaryMind commented on May 25, 2024

Thank you for the additional guidance. Looking through the code, I noticed that there was no AzureKinectCapture class, and it suddenly became apparent that I had not switched from the master branch to the AzureKinect branch. Once I made that switch, the client was able to capture and record frames. Thank you for taking the time to respond, and sorry for the distraction!


VisionaryMind commented on May 25, 2024

I would like to ask one last question regarding point clouds. Is there a reason they are rendered upside down? Could you give me a tip as to where in the code the rotation around the Z axis could be turned 180 degrees? The live view also displays the point cloud upside down.


MarekKowalski commented on May 25, 2024

Hi, this is due to the Azure Kinect's coordinate system being different from the Kinect v2's. The simplest way to change it is to perform calibration using the markers in the docs section (an in-code alternative is sketched after the steps below). The easiest way to do this would be:

  • Go to the server settings, add a marker with id 0, and set its rotation around the Z axis to 180 degrees.
  • Print the marker, place it somewhere the Kinect can see it, and press Calibrate in the server. If you have no way to print it, you can display it on your phone.
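If you would rather flip the cloud directly in code, a 180-degree rotation around the Z axis amounts to negating the X and Y coordinates of every point. A minimal sketch, assuming a simple point type rather than LiveScan3D's actual data structures:

```cpp
#include <vector>

// Illustrative sketch: rotate a point cloud 180 degrees around the Z axis.
// R_z(180 deg) = [[-1, 0, 0], [0, -1, 0], [0, 0, 1]], i.e. negate X and Y.
struct Point3f { float x, y, z; };

void FlipUpsideDown(std::vector<Point3f>& cloud)
{
    for (Point3f& p : cloud)
    {
        p.x = -p.x;
        p.y = -p.y;
        // p.z is unchanged.
    }
}
```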

Marek


VisionaryMind commented on May 25, 2024

This is not specifically related to the original issue, but I want to keep it in the AzureKinect branch category. I am attempting to capture timestamps for all frames and am storing them in an array inside KinectServer's GetStoredFrame method. Unfortunately, when these are streamed to file along with the PLYs, the timestamps appear to start after the recording is stopped. Is there a more appropriate place to capture a timestamp for each frame? I noticed you are storing the current time in tFPSUpdateTimer inside OpenGLWindow.cs; however, I would have presumed this happens after the frame is received and GetStoredFrame is invoked. If you have a moment, please let me know what I have missed, as I would very much like to be able to move the depth camera around and know, to the microsecond, when each frame was captured.
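For what it's worth, one approach that avoids timestamping after the fact is to record the host's clock on the client at the moment each capture call returns, and to stream that value along with the frame. A minimal C++ sketch using the k4a C API (the surrounding structure is an assumption, not the actual LiveScan3D client code):

```cpp
#include <chrono>
#include <k4a/k4a.h>

// Illustrative sketch: record the host's wall-clock time the moment a
// frame arrives, instead of when the server later stores the frame.
struct TimestampedCapture
{
    k4a_capture_t capture;
    std::chrono::system_clock::time_point hostTime;
};

bool GrabFrame(k4a_device_t device, TimestampedCapture& out)
{
    if (k4a_device_get_capture(device, &out.capture, 1000) != K4A_WAIT_RESULT_SUCCEEDED)
        return false;

    // Taken as close to frame arrival as possible; microsecond
    // resolution on most platforms.
    out.hostTime = std::chrono::system_clock::now();
    return true;
}
```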


ChristopherRemde commented on May 25, 2024

I don't know if this helps you, but in PR #49 I changed the timestamp to be taken directly from the Kinect device rather than from the PC it runs on. That could be a bit more accurate.

But please note that this timestamp is not a synchronized global time (e.g., 11:12 AM) but rather a timer that starts when the device starts its capture (e.g., 5.7192 seconds after device start).
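For reference, that device-side timestamp can be read from any image in a capture. A minimal sketch with the k4a C API (illustrative, not the code from the PR itself):

```cpp
#include <stdint.h>
#include <k4a/k4a.h>

// Illustrative sketch: read the device-side timestamp of a depth frame.
// The value is microseconds since the device started capturing,
// not wall-clock time.
uint64_t GetDeviceTimestampUsec(k4a_capture_t capture)
{
    k4a_image_t depth = k4a_capture_get_depth_image(capture);
    if (depth == NULL)
        return 0;

    uint64_t usec = k4a_image_get_device_timestamp_usec(depth);
    k4a_image_release(depth);
    return usec;
}
```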


VisionaryMind commented on May 25, 2024

But please note that this timestamp is not a synchronized global time (e.g., 11:12 AM) but rather a timer that starts when the device starts its capture (e.g., 5.7192 seconds after device start).

Yes, I was aware of this caveat, and therefore did not try to work with the device's time. Our workflow uses multiple capture devices (audio, video, LiDAR, depth, DSLR), and everything is synced to LTC / SMPTE timecode. Even if I were to capture system time and then increment it by the Kinect's timer, I am quite certain it would not be in sync with any other device capturing at the same time. System time is about as close as we can get, especially if the devices are on a single system.

It sounds like the answer here is to feed an LTC timesync signal into the Kinect's audio stream. Do you implement such a stream anywhere in your code? I did not see a feature to capture audio, but the Kinect has a 360-degree microphone array, and it might be quite novel to use it, should multiple "clusters" of Kinects be implemented for parallel volumetric capture.

I will be happy to move this into the feature request section, should you feel it is something worth pursuing. Most of our code is written in Python, so if you have any C++/C# example snippets lying around that implement such a feature, please let me know. Thank you for your time!


MarekKowalski commented on May 25, 2024

The app does not read the audio stream anywhere in the code, unfortunately, and I don't think I have any snippets of such code lying around.
I feel that an alternative solution for you might be to synchronize the devices using an external trigger, as discussed here. You could have the trigger pulse generated at a precise time, which would provide you with the frame's timestamp.
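For completeness, the external trigger route uses the Azure Kinect's 3.5 mm Sync In/Sync Out ports, which the SDK exposes through the wired sync mode in the device configuration. A minimal sketch (the mode and delay values are placeholders to adapt to your rig):

```cpp
#include <k4a/k4a.h>

// Illustrative sketch: configure a device to fire its captures off an
// external sync pulse arriving on the "Sync In" port.
k4a_device_configuration_t MakeSubordinateConfig()
{
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.color_resolution = K4A_COLOR_RESOLUTION_720P;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;

    // Subordinate mode: frames are triggered by the incoming pulse.
    config.wired_sync_mode = K4A_WIRED_SYNC_MODE_SUBORDINATE;

    // Placeholder: per-device delay relative to the trigger, in
    // microseconds, e.g. to stagger depth exposures across devices.
    config.subordinate_delay_off_master_usec = 0;

    return config;
}
```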

