
depthviewer's Introduction

DepthViewer

main_image
Using the MiDaS machine learning model, DepthViewer renders 2D videos/images as 3D objects in Unity for VR.

Try Now

Outdated builds (less effective 3D)

Examples

Original input (resized) Plotted (MiDaS v2.1) Projected Src
example1_orig_resized example1_plotted example1_projected #

So what is this program?

This program is essentially a depthmap plotter with an integrated depthmap inference engine and VR support.

demo_basic

The depthmaps can be cached to a file so that they can be loaded later.
demo_cache

Inputs

  • Right mouse button: hide the UI.
  • WASD: rotate the mesh.
  • Backtick `: open the console.
  • Numpad 4/5/6: pause/skip the video.

Models

The built-in model is the MiDaS v2.1 small model, which is suitable for real-time rendering.

Loading ONNX models

Tested onnx files:

In my experience, dpt_hybrid_384 seems to be more robust for drawn images (i.e. non-photos).

  • Put the onnx files under the onnx directory.
  • Open this options menu, select the file, and click the Load button.

OnnxRuntime GPU execution providers

  • CUDA: also requires cuDNN. Tested versions: CUDA v11.7 and cuDNN v8.2.4.
  • For others, see here

Recording 360 VR video

If you select a depthfile and a matching image/video, a sequence of .jpg files will be generated in Application.persistentDataPath.
Go to the directory, and execute

ffmpeg -framerate <FRAMERATE> -i %d.jpg <output.mp4>

Where <FRAMERATE> is the original FPS.
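If the source's exact FPS isn't known, ffprobe reports it as a fraction (e.g. 30000/1001 for NTSC 29.97). As a purely illustrative helper (not part of DepthViewer), that fraction can be turned into the command above:

```python
from fractions import Fraction

def reassembly_command(r_frame_rate: str, output: str = "output.mp4") -> str:
    """Build the ffmpeg reassembly command from ffprobe's r_frame_rate fraction.

    The fraction can be obtained with e.g.:
      ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate \
              -of default=noprint_wrappers=1:nokey=1 source.mp4
    """
    fps = float(Fraction(r_frame_rate))  # "30000/1001" -> 29.97002...
    return f"ffmpeg -framerate {fps:g} -i %d.jpg {output}"

print(reassembly_command("30000/1001"))  # ffmpeg -framerate 29.97 -i %d.jpg output.mp4
```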

To add audio,

ffmpeg -i <source.mp4> -i <output.mp4> -c copy -map 1:v:0 -map 0:a:0 -shortest <output_w_audio.mp4>

Connecting to an image server

The server has to provide a jpg or png bytestring when requested, like this program does: it captures the screen and returns a jpg file. I found it to be faster than the built-in capture (20fps for 1080p video).
Open the console with the backtick ` key and execute the following (this URL is for the project above, targeting the second monitor):

httpinput localhost:5000/screencaptureserver/jpg?monitor_num=2
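For reference, such a server only needs to return image bytes on a GET request. Below is a minimal stand-in using only the Python standard library; a hand-built 1x1 PNG replaces the actual screen capture, so this is a sketch of the interface, not the server linked above:

```python
import http.server
import struct
import threading
import urllib.request
import zlib

def make_png(width=1, height=1, rgb=(255, 0, 0)):
    """Hand-build a minimal truecolor PNG so no imaging library is needed."""
    def chunk(ctype, data):
        return (struct.pack(">I", len(data)) + ctype + data
                + struct.pack(">I", zlib.crc32(ctype + data)))
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)
    raw = b"".join(b"\x00" + bytes(rgb) * width for _ in range(height))
    return (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw)) + chunk(b"IEND", b""))

class ImageHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = make_png()  # a real server would capture the screen here
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Serve on an ephemeral port and fetch one frame, as the httpinput command would.
server = http.server.HTTPServer(("localhost", 0), ImageHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://localhost:{server.server_address[1]}/jpg"
frame = urllib.request.urlopen(url).read()
server.shutdown()
print(frame.startswith(b"\x89PNG\r\n\x1a\n"))  # True
```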

Importing/Exporting parameters for the mesh

After loading an image or a video while the Save the output toggle is on, enter the console command

e

This saves the current parameters (Scale, ...) into the depthfile so that it can be used later.

Using ZeroMQ + Python + PyTorch/OnnxRuntime

May be unstable. Implemented after v0.8.11-beta.1.

  1. Run DEPTH/depthpy/depthmq.py. (Also see here for its dependencies; pyzmq is additionally required.)
  2. In the DepthViewer program, open the console and type zmq 5555.

Use python depthmq.py -h for more options such as the port (default: 5555) and the model (default: dpt_hybrid_384). To use OnnxRuntime instead of PyTorch, add --runner ort and --ort_ep cuda or --ort_ep dml; these require onnxruntime-gpu or onnxruntime-directml, respectively.

Using ZeroMQ + Python + FFmpeg + PyTorch/OnnxRuntime

Gone are the days of VP9 errors and slow GIF decoding. Implemented after v0.8.11-beta.2.

  1. Run DEPTH/depthpy/ffpymq.py. Add --optimize for the float16 optimization.
  2. In the DepthViewer program, open the console and type zmq_id 5556. All video/GIF inputs will then be passed to the server, which returns the image and the depth. Use zmq_id -1 to disconnect.

Tested formats:

Images

  • .jpg
  • .png

Videos

  • .mp4, ... : Some files can't be played because Unity's VideoPlayer can't open them. (e.g. VP9)

Others

  • .gif : Certain formats are not supported.
  • .pgm : Can be used as a depthmap (needs a subsequent image input)
  • .depthviewer

Notes

  • If a VR HMD is detected, it will open with OpenXR.
  • All outputs will be cached to Application.persistentDataPath (In Windows, ...\AppData\LocalLow\parkchamchi\DepthViewer).
  • Depth files this program creates have the extension .depthviewer; each is a zip file containing .pgm files and a metadata file.
  • To create .depthviewer files using python, see here
  • Rendering the desktop is only supported in Windows for now.
  • C# scripts are in DEPTH/Assets/Scripts.
  • Python scripts are in DEPTH/depthpy.
  • Also see here
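Since a .depthviewer file is just a zip archive of .pgm depthmaps plus metadata (as noted above), it can be inspected with Python's standard library. The entry names below are illustrative, not the exact internal layout:

```python
import io
import zipfile

def parse_pgm_header(data: bytes):
    """Return (width, height, maxval) of a binary (P5) PGM.
    Comment lines (#...) are not handled in this sketch."""
    fields = data.split(None, 4)  # magic, width, height, maxval, pixel data
    assert fields[0] == b"P5", "expected a binary PGM"
    return tuple(int(f.decode()) for f in fields[1:4])

# Build a stand-in .depthviewer archive in memory: a zip holding PGM frames
# plus a metadata file (entry names here are made up for illustration).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("0.pgm", b"P5 4 3 255 " + bytes(4 * 3))  # 4x3 8-bit depthmap
    zf.writestr("metadata.txt", "framecount=1")

with zipfile.ZipFile(buf) as zf:
    pgms = [n for n in zf.namelist() if n.endswith(".pgm")]
    width, height, maxval = parse_pgm_header(zf.read(pgms[0]))

print(pgms, width, height, maxval)  # ['0.pgm'] 4 3 255
```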

Todo

  • Overhaul UI & Control
  • Add more options
  • Fix codecs
  • Stabilize

WIP

  • VR controllers support (See here)
  • Support for the servers that send both the image file and the depthmap

Building

The Unity Editor version used: 2021.3.10f1

ONNX Runtime dll files

  • These are added to the repo with Git LFS since v0.8.9

These dll files have to be in DEPTH/Assets/Plugins/OnnxRuntimeDlls/win-x64. They are inside the NuGet package files (.nupkg); get them from:

Microsoft.ML.OnnxRuntime.Gpu => microsoft.ml.onnxruntime.gpu.1.13.1.nupkg/runtimes/win-x64/native/*.dll

  • onnxruntime.dll
  • onnxruntime_providers_shared.dll
  • onnxruntime_providers_cuda.dll
  • I don't think this is needed: onnxruntime_providers_tensorrt.dll

Microsoft.ML.OnnxRuntime.Managed => microsoft.ml.onnxruntime.managed.1.13.1.nupkg/lib/netstandard1.1/*.dll

  • Microsoft.ML.OnnxRuntime.dll
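Since .nupkg files are ordinary zip archives, the extraction step above can be scripted. A hypothetical helper (not part of the repo) might look like this:

```python
import pathlib
import zipfile

def extract_native_dlls(nupkg_path, dest_dir, prefix="runtimes/win-x64/native/"):
    """Copy the native .dll files out of a .nupkg (which is just a zip archive)."""
    dest = pathlib.Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    with zipfile.ZipFile(nupkg_path) as zf:
        for name in zf.namelist():
            if name.startswith(prefix) and name.endswith(".dll"):
                target = dest / pathlib.Path(name).name
                target.write_bytes(zf.read(name))
                copied.append(target.name)
    return copied

# Usage, with the paths from the section above:
# extract_native_dlls("microsoft.ml.onnxruntime.gpu.1.13.1.nupkg",
#                     "DEPTH/Assets/Plugins/OnnxRuntimeDlls/win-x64")
```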

Misc

Libraries used

@article {Ranftl2022,
    author  = "Ren\'{e} Ranftl and Katrin Lasinger and David Hafner and Konrad Schindler and Vladlen Koltun",
    title   = "Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer",
    journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
    year    = "2022",
    volume  = "44",
    number  = "3"
}
@article{Ranftl2021,
	author    = {Ren\'{e} Ranftl and Alexey Bochkovskiy and Vladlen Koltun},
	title     = {Vision Transformers for Dense Prediction},
	journal   = {ICCV},
	year      = {2021},
}

For Python scripts only:

@misc{https://doi.org/10.48550/arxiv.2302.12288,
  doi = {10.48550/ARXIV.2302.12288},
  url = {https://arxiv.org/abs/2302.12288},
  author = {Bhat, Shariq Farooq and Birkl, Reiner and Wofk, Diana and Wonka, Peter and Müller, Matthias},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title = {ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth},
  publisher = {arXiv},
  year = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
@article{depthanything,
      title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data}, 
      author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
      journal={arXiv:2401.10891},
      year={2024}
}

Misc

Also check out

Remarks

2023 March 9

This project was started in September 2022 with the primary goal of using monocular depth estimation ML models with VR headsets. I could not find any existing programs that fit this need, except for a closed-source program, VRin (link above). That program (then and still in Alpha 0.2) was the main inspiration for this project, but I needed more features like image inputs, other models, etc. As it was closed source, I grabbed a Unity/C# book and started to generate a mesh from a script.

I gradually added features by trial-and-error rather than through planned development, which made the code a bit messy, and many parts of this program could have been better. But after a series of progressions, I found the v0.8.7 build to be good enough for my personal use. So this project is on "indefinite hiatus" from now on, but I'm still open to minor feature requests and bug fixes.

I thank all the people who gave me compliments, advice, bug reports, and criticism.

Thank you.

Chanjin Park [email protected]

2023 March 21

I'll still be updating this project, though it may be slow since school has started again.

depthviewer's People

Contributors: parkchamchi

depthviewer's Issues

xformers for potential speedup, or torch 2.01 arguments

I had read that using xformers (pip install xformers) could possibly result in a large speedup in the Marigold and Depth-Anything realtime conversion. The issue is I can't find any xformers wheel that is compatible with 2.0.1+cu117 (CUDA 11.7), and I'm not sure if the Unity project requires that version of CUDA to work.

It seems like xformers version 22 may be compatible with torch 2.0.1 and CUDA 11.8.

If this doesn't work because it's too old: I had read that with torch 2.0.1 you can get as large a speedup as xformers by adding the --opt-sdp-attention or --opt-sdp-no-mem-attention arguments (but these seem to be flags specific to automatic1111; I am wondering if the same sort of thing could be done here?)

I still can't get the depth-anything model going quite yet to test though. Somehow threedeejay did but he says it runs at 2 frames per second.

Depthviewer only generates 518x518 for DAN

Hey, I am trying to convert a bunch of images, and I've noticed that if I use depth.py it generates much better depth. Looking at the depth files, it appears that depth.py downscales the image so that one side is 518 and the other is bigger, while the viewer only does 518x518. Since the viewer does not see the previously generated depth files, this means that for each image the depth file needs to be opened, then the image file, every time.

Would it be possible to have a way to automatically read the generated depth files or make the viewer generate it like how depth.py does please?

Thank you.

Depth images with high dynamic range (high precision)

Hi, great software!

I noticed that all depth data in files is only 8 bits per channel. Would it be possible to optionally use higher precision for export? PNG, for example, can store 32 bits per color. TIFF or OpenEXR could also be used. Gimp unfortunately doesn't support high dynamic range as far as I know, but Photoshop or Blender will do. 8-bit can introduce artifacts when used as a displacement map for rendering.

Best regards!

Error Issues for opening and setting up the project file.

Hello Sir or Madam.

I am a beginner developer new to using MiDas.

I found your marvelous work during my current research.

But I also found some missing files (such as Generated, TextMesh Pro, UserSettings):
Missing_Files_001

and there are some problems when Unity reads the dll files:
Errors_001

I am wondering if you can guide me on how to set up your most current project file in Unity properly.

That will be very helpful.

Thank you for reading.

Kind Regards
jhxr

Support for other GPUs (Intel Arc, AMD)

Hiya, I used to use your software on my GTX 1060, initially through GitHub and then through Steam, but I've since upgraded to an Intel Arc A770, and now when loading any model your application goes directly to CPU inference.

Is it possible you could make it compatible with other GPUs? I noticed the built-in Barracuda model runs on my GPU, but everything else does not.

Caching not sequential

I'm not sure if I'm missing an option but the player skips many frames per run as opposed to converting each frame in a row, making it so the video has to be replayed multiple times just to have it fully cached. Any way to make it convert each frame in sequence for depthanything please?

Thank you.

Onnxruntime errors on startup of Unity Project

I noticed the onnxruntime .dll files show 3 errors by default when you open this project in Unity, and are not set to "load on startup" so I selected all the onnxruntime dll files under \DEPTH\Assets\Plugins\OnnxRuntimeDlls\win-x64 and set load on startup.

They try to load now, but there are 4 new errors (highlighted) for all of them: "expected x64 architecture, but was Unknown architecture. You must recompile your plugin for x64 architecture." I am using 64-bit Unity version 2021.3.10f1. I have tried on two separate systems with the same errors.

error

I tried to change the any platform and platform settings for the .dll but it makes no difference.

[Feature request] MKV container support

MKV files (commonly used in movie rips) can't be opened (hidden in the open file dialog) so it'd be nice to be able to load my 2D movie rips to watch them in 3D 👌
DepthViewer_9zkDsUQg69

depth.py in Windows (CR LF) format (affects depth-anything script from working)

I created a new issue because I found a problem: the depth.py from the beta6 release is in Windows (CR LF) format, while the one in Threedeejay's working beta6 build for Depth-anything is in Unix (LF) format (check the bottom right of the Notepad++ images). The scripts are otherwise identical. (I think the reported difference in line count comes from a big square block being created, which throws it off, as you'll see.) This does not affect Marigold in the script, but it does affect Depth-anything.

threedeejay-beta6
release-build

When I compare them side by side, something is happening at line 611 when the script is in the default Windows CR LF format from the beta release .zip files. This small difference in format makes the script not work.
compare

It was downloaded through Firefox, and I thought maybe it was changing the contents of the zip files somehow, but I also tried curl -L -o DepthViewer-v0.9.1-beta.6-win64.7z https://github.com/parkchamchi/DepthViewer/releases/download/v0.9.1-beta.6/DepthViewer-v0.9.1-beta.6-win64.7z with the same results. I also thought maybe WinRAR was changing it, but I tried 7zip and it's still Windows (CR LF).

I also tried:

  1. Open the file in Notepad++.
  2. Go to Edit > EOL Conversion > UNIX/OSX Format.
  3. Save the file.

But it seems like that doesn't work either because it's still showing a line difference and seems to have corrupted the file metadata already, so simply converting doesn't work. Not sure how threedeejay got a working copy in Unix (LF) format by default. Here are the two scripts if anyone can figure this out. The only fix is if I overwrite the depth.py in the release beta 6 with the unix (lf) format depth.py.

depth.py directly extracted from DepthViewer-v0.9.1-beta.6-win64.7z
depth.zip

depth.py threedog beta 6 version he extracted from DepthViewer-v0.9.1-beta.6-win64.7z (I believe)
depththreedeejay-beta6.zip

Edit: Threedeejay said he thinks he overwrote it with this unreleased version: https://codeload.github.com/parkchamchi/DepthViewer/zip/refs/heads/master I downloaded it, and the depth.py there is indeed in Unix (LF) format, so it works. (The cloning report shows Windows CR LF format.)

[Feat] Geowizard implementation

There is a new model released this week that is supposed to be like marigold, but much faster. Could be useful for Depthviewer, would love to see implemented natively. https://github.com/fuxiao0719/GeoWizard

Demo link: https://huggingface.co/spaces/lemonaddie/geowizard

Also here is a link for running it locally through comfyui, to use on either real images or AI generated ones: https://github.com/kijai/ComfyUI-Geowizard Not sure if some of the reference code could help.

Errors when clone repo (not just lfs storage past quota problem)

I have noticed whenever I try to clone the repo I run into errors in the Unity Project on startup. I actually fixed them in an older local build here, but forgot exactly what I did and hoping to find some clarification.

For the LFS storage being past quota (and the onnxruntime files not downloading fully), I just copy and paste the onnxruntime .dlls from my working project for now. But when I start it up I am getting PlasticSCM errors and lib_burst_generated.dll-being-copied-twice errors (deleting duplicates of lib_burst_generated.dll and deleting the Library and Temp folders did not resolve it).

I think I had to turn off enable burst compilation in project settings > burst aot settings, but not sure if there's a consequence to that. The plasticSCM error I also just got rid of again somehow (I think I may have updated in the package manager but not sure) but these errors are present each time I clone repo. Any ideas?

[Feature Request] 3D sbs video depthmap support (move left and right meshes/depthmaps to same position in Unity coordinate space for enhanced 3D? Using only left eye texture for both meshes)

while I was using this app I was thinking about how the monoscopic to VR video conversion works so well in this even with just Midas, Beit and Marigold being mindblowing. follow this post

If we were to add 3D side-by-side VR180 video support with the realtime depthmaps (and Marigold, if it ever magically gets optimized for realtime), would this allow for VR180 videos with better 3D and positional tracking?

I just loaded a VR video from my VR180 camera and the left and right side appear and has depthmaps on both sides, it just needs to be combined into one image somehow. This could make for a much more comfortable and immersive viewing experience due to the 3D being better from the side by side image having more views.

I've realized this app actually does have 6dof positional tracking for the headset; the problem was that the objects are so large and far away that it feels like 3dof. (Edit: the VR camera is very far away by default.) Is there no way to physically move forward and backward in the space?

[Requests/Info] couple requests and questions

Hi,
Thanks for this lovely app! Now i dont have to use blender anymore xD
i would have a couple ideas/requests for you:

1 – Autoload different models depending on Mediatype (e.g. built-in for Video/Gif, BEiT_L512 for Images)
1.1 – keep both Models loaded if possible (idk if thats even a thing, im not a programmer sorry 🙈)

2 – Automatic motion (like in Depthy where you can set the distance and radius)

3 – Support for Equirectangular images for 360°-view

4 – Depthmap preview (grayscale/inferno)

5 – High/Low Clipping of depth (for example when background/foreground seperation is too extreme)

Kind regards,
Cirrus

PS: Is there any explanation what the sliders actually do? i only understand ProjRatio, ScaleR and DepthMultRL... 😅

[Feature Request] implementing new Python Library for Auto1111/Stable diffusion (Unity GUI)

There is a new python library/sdk that released that could potentially allow for implementing AI image generation and prompting into depthviewer (with some added GUI elements and code added to Unity of course) https://github.com/saketh12/Auto1111SDK

The dev said on reddit they plan on adding animatediff support for the AI video clips, and bringing automatic1111 extensions. This would mean there would be no need to install automatic1111 separately or using the automatic1111 --api flag

Implementing this in depthviewer could potentially allow for generating AI content from within the app, and then have it convert with depth-anything or marigold immediately, could be pretty mindblowing. Perhaps they could even add stable diffusion video support in the future.

I opened an issue over there to ask the specifics on if it would be possible. Just putting this feature request in, I feel it would really enhance the capabilities of the app. (If it's feasable with what they released)

[Feature Request] Buying Proper VR headset for dev here (Mock HMD XR plugin??)

Edit: Issue closed, I must have turned it on by mistake because when I start a new project it's Mock HMD is not on by default.

I think I have just realized why this app is lacking in proper VR support with motion controls and positional tracking: is it because it's using the "Mock HMD XR Plugin" for Unity by default, which is for people building without a device?

If the dev doesn't have a headset, I would highly recommend buying at least a Quest 2 as a bare minimum, which can be had for about $150 on the used marketplace at the moment; I'd even be willing to donate some money to get you one so this app gets proper VR support. The Quest 3 just came out and is a lot better but more expensive, so if money is an issue let me know.

This app has a lot of potential right now.

New versions won't work in desktop mode for me (not sure what changed on my system)

Here is a video of the issue I am encountering. Hard hangs on startup for build releases, it also does this in VR mode. Only the old version at the end of video works.

Any ideas? The old 0.91 used to work fine last week, I imagine 0.10 works for everyone also, the only things that have changed on my system is meta v62 update but I don't see how this would affect desktop mode like this?

Recording.2024-02-04.225227.1.mp4

Error with "browse files" button in new Unity version 2022.3.18f1 beta project.

Very happy with the update as this allows me to implement new depth-api support from meta for hand occlusion. (similar to how apple vision pro does it with hands to blocking virtual objects)

The only issue I have is that in play mode in the editor, if I click the browse button in the GUI I get this error now:

NullReferenceException: Object reference not set to an instance of an object
MainBehavior.BrowseFiles () (at Assets/Scripts/MainBehavior.cs:598)
UnityEngine.Events.InvokableCall.Invoke () (at <f7237cf7abef49bfbb552d7eb076e422>:0)
UnityEngine.Events.UnityEvent.Invoke () (at <f7237cf7abef49bfbb552d7eb076e422>:0)
UnityEngine.UI.Button.Press () (at ./Library/PackageCache/[email protected]/Runtime/UI/Core/Button.cs:70)
UnityEngine.UI.Button.OnPointerClick (UnityEngine.EventSystems.PointerEventData eventData) (at ./Library/PackageCache/[email protected]/Runtime/UI/Core/Button.cs:114)
UnityEngine.EventSystems.ExecuteEvents.Execute (UnityEngine.EventSystems.IPointerClickHandler handler, UnityEngine.EventSystems.BaseEventData eventData) (at ./Library/PackageCache/[email protected]/Runtime/EventSystem/ExecuteEvents.cs:57)
UnityEngine.EventSystems.ExecuteEvents.Execute[T] (UnityEngine.GameObject target, UnityEngine.EventSystems.BaseEventData eventData, UnityEngine.EventSystems.ExecuteEvents+EventFunction`1[T1] functor) (at ./Library/PackageCache/[email protected]/Runtime/EventSystem/ExecuteEvents.cs:272)
UnityEngine.EventSystems.EventSystem:Update() (at ./Library/PackageCache/[email protected]/Runtime/EventSystem/EventSystem.cs:530)

Not too sure how to fix this.

Sentis from Barracuda migration

Unity released the Sentis package, which seems to be a rebranding of Barracuda. Plenty of the APIs have been changed, and I made the sentis branch, with the init commit 897dc08.

The first error comes from the Texture-to-Tensor conversion, as here

//using (var tensor = new Tensor(source, 3)) {
using (var tensor = TextureConverter.ToTensor(source, channels:3)) {

The second (uncommented) line should replace the older code.

But when the TextureConverter.ToTensor(source); is executed,

NullReferenceException: Object reference not set to an instance of an object
Unity.Sentis.ComputeShaderSingleton.RegisterKernels (System.String shaderName, System.String[] kernelNames) (at Library/PackageCache/[email protected]/Runtime/Core/Backends/GPUCompute/ComputeShaderSingleton.cs:175)
Unity.Sentis.ComputeShaderSingleton.RegisterGeneratedKernels () (at Library/PackageCache/[email protected]/Runtime/Core/Backends/GPUCompute/ComputeShaderSingleton.gen.cs:9)
Unity.Sentis.ComputeShaderSingleton..ctor () (at Library/PackageCache/[email protected]/Runtime/Core/Backends/GPUCompute/ComputeShaderSingleton.cs:38)
Unity.Sentis.ComputeShaderSingleton..cctor () (at Library/PackageCache/[email protected]/Runtime/Core/Backends/GPUCompute/ComputeShaderSingleton.cs:20)
Rethrow as TypeInitializationException: The type initializer for 'Unity.Sentis.ComputeShaderSingleton' threw an exception.
Unity.Sentis.ComputeFunc..ctor (System.String kn) (at Library/PackageCache/[email protected]/Runtime/Core/Backends/GPUCompute/ComputeFunc.cs:24)
Unity.Sentis.ComputeFuncSingleton.Get (System.String name) (at Library/PackageCache/[email protected]/Runtime/Core/Backends/GPUCompute/ComputeFuncSingleton.cs:16)
Unity.Sentis.TextureConverter.ToTensor (UnityEngine.Texture texture, Unity.Sentis.TextureTransform transform) (at Library/PackageCache/[email protected]/Runtime/Core/Converters/TextureConverter.cs:52)
Unity.Sentis.TextureConverter.ToTensor (UnityEngine.Texture texture, System.Int32 width, System.Int32 height, System.Int32 channels) (at Library/PackageCache/[email protected]/Runtime/Core/Converters/TextureConverter.cs:25)

A null exception arises in the Sentis backend:

void RegisterKernels(string shaderName, string[] kernelNames)
{
    foreach (var kernelName in kernelNames)
    {
        m_KernelToShaderName[kernelName] = shaderName;

        var shader = FindComputeShader(kernelName); //line 174
        var kernelIndex = shader.FindKernel(kernelName); //here, line 175

Since it's a null exception, I guess it's the FindComputeShader() call on line 174 that is faulty.

GPU acceleration/performance improvement?

Hi, I'm getting really nice results with the large models, but performance is terrible (<1 FPS).
I'm currently running an RTX 2080 Ti that's barely getting any use. Would GPU acceleration give a significant performance boost?
If so, could you please be more specific as to how to set up DepthViewer with CUDA/cuDNN? There's so many options I'm not sure what/how to install them exactly, or where to get it from.

Depth Anything support

As mentioned in #9 (comment), there's this new model that's been making the rounds and it's indeed comparable to BEiT in terms of detection accuracy, though I'm curious about performance, so I just tried loading the third-party ONNX version via the Unity app, but I'm getting the following errors:

This part repeats infinitely/on every frame when attempting to play a video file. It happens with all models, whether GPU/CUDA is enabled or not:

RenderTexture.Create failed: width & height must be larger than 0
Texture has out of range width / height
UnityException: Failed to create texture because of invalid parameters.
UnityEngine.Texture2D.Internal_Create (UnityEngine.Texture2D mono, System.Int32 w, System.Int32 h, System.Int32 mipCount, UnityEngine.Experimental.Rendering.GraphicsFormat format, UnityEngine.Experimental.Rendering.TextureCreationFlags flags, System.IntPtr nativeTex) (at <e036a72b7a734c938bd0e3bb10424e1b>:0)
UnityEngine.Texture2D..ctor (System.Int32 width, System.Int32 height, UnityEngine.TextureFormat textureFormat, System.Int32 mipCount, System.Boolean linear, System.IntPtr nativeTex) (at <e036a72b7a734c938bd0e3bb10424e1b>:0)
UnityEngine.Texture2D..ctor (System.Int32 width, System.Int32 height) (at <e036a72b7a734c938bd0e3bb10424e1b>:0)
OnnxRuntimeDepthModel.Run (UnityEngine.Texture inputTexture) (at <c22d50f66c014b868279a538562184fd>:0)
ImgVidDepthTexInputs.UpdateVid () (at <c22d50f66c014b868279a538562184fd>:0)
ImgVidDepthTexInputs.UpdateTex () (at <c22d50f66c014b868279a538562184fd>:0)
MainBehavior.Update () (at <c22d50f66c014b868279a538562184fd>:0)

The error is different when loading images though:

RenderTexture.Create failed: width & height must be larger than 0
Texture has out of range width / height
UnityException: Failed to create texture because of invalid parameters.
UnityEngine.Texture2D.Internal_Create (UnityEngine.Texture2D mono, System.Int32 w, System.Int32 h, System.Int32 mipCount, UnityEngine.Experimental.Rendering.GraphicsFormat format, UnityEngine.Experimental.Rendering.TextureCreationFlags flags, System.IntPtr nativeTex) (at <e036a72b7a734c938bd0e3bb10424e1b>:0)
UnityEngine.Texture2D..ctor (System.Int32 width, System.Int32 height, UnityEngine.TextureFormat textureFormat, System.Int32 mipCount, System.Boolean linear, System.IntPtr nativeTex) (at <e036a72b7a734c938bd0e3bb10424e1b>:0)
UnityEngine.Texture2D..ctor (System.Int32 width, System.Int32 height) (at <e036a72b7a734c938bd0e3bb10424e1b>:0)
OnnxRuntimeDepthModel.Run (UnityEngine.Texture inputTexture) (at <c22d50f66c014b868279a538562184fd>:0)
ImgVidDepthTexInputs.FromImage (UnityEngine.Texture texture) (at <c22d50f66c014b868279a538562184fd>:0)
ImgVidDepthTexInputs.FromImage (System.String filepath) (at <c22d50f66c014b868279a538562184fd>:0)
ImgVidDepthTexInputs..ctor (FileTypes ftype, IDepthMesh dmesh, DepthModel dmodel, System.String filepath, System.Boolean searchCache, System.Boolean canUpdateArchive, UnityEngine.Video.VideoPlayer vp, IVRRecord vrrecord, AsyncDepthModel asyncDmodel, System.Boolean forceStopNotPauseOnLoopPoints) (at <c22d50f66c014b868279a538562184fd>:0)
MainBehavior.SelectFile (System.String filepath) (at <c22d50f66c014b868279a538562184fd>:0)
MainBehavior.<BrowseFiles>b__60_0 (System.String path) (at <c22d50f66c014b868279a538562184fd>:0)
StandaloneFileSelecter.SelectFile (OnPathSelected callback) (at <c22d50f66c014b868279a538562184fd>:0)
MainBehavior.BrowseFiles () (at <c22d50f66c014b868279a538562184fd>:0)
UnityEngine.Events.InvokableCall.Invoke () (at <e036a72b7a734c938bd0e3bb10424e1b>:0)
UnityEngine.Events.UnityEvent.Invoke () (at <e036a72b7a734c938bd0e3bb10424e1b>:0)
UnityEngine.UI.Button.Press () (at <f96b039c628241f18868828ba427ed81>:0)
UnityEngine.UI.Button.OnPointerClick (UnityEngine.EventSystems.PointerEventData eventData) (at <f96b039c628241f18868828ba427ed81>:0)
UnityEngine.EventSystems.ExecuteEvents.Execute (UnityEngine.EventSystems.IPointerClickHandler handler, UnityEngine.EventSystems.BaseEventData eventData) (at <f96b039c628241f18868828ba427ed81>:0)
UnityEngine.EventSystems.ExecuteEvents.Execute[T] (UnityEngine.GameObject target, UnityEngine.EventSystems.BaseEventData eventData, UnityEngine.EventSystems.ExecuteEvents+EventFunction`1[T1] functor) (at <f96b039c628241f18868828ba427ed81>:0)
UnityEngine.EventSystems.EventSystem:Update()

slideshow?

hello there :)
can you implement a simple slideshow when opening a folder?
