ydrive / EasySynth
Unreal Engine plugin for easy creation of synthetic image datasets
License: MIT License
Any plans to support UE5?
I would love to help with this, but I'm just getting started with UE altogether.
Enable coloring of:
It would be very helpful if the plugin were also exposed through a Python API.
That would help automate data generation via scripting, which could reduce errors, etc.
Thank you for your work. This plugin is very nice to use and works in most scenarios.
I tried to use it to get the depth of the digital human in the official MetaHuman project, but the depth at the hair seems wrong. The depth value falls between the background depth and the foreground depth, especially at the edges of the hair and at individual strands.
This seems to be caused by a feature of Groom itself. I don't know if there is a way to fix it.
The maximum depth is set to 30 m, the resolution to 1080p, and the other settings remain the same as in the tutorial.
Hi,
I am wondering if there is a way to create camera trajectories randomly, so that one can generate large amounts of data efficiently rather than manually creating level sequences.
I tried to find a way to create level sequences randomly but did not have much luck. Any pointers would be really great. I also found this blog post; the section "Automatic navigation and screenshots" talks about how to navigate through the scene automatically. I am wondering whether that would be a good way to go and whether it would work with EasySynth?
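Lacking a built-in way, one stopgap is to generate smooth random trajectories offline and key them into level sequences afterwards, by hand or via editor scripting. A minimal sketch, independent of EasySynth itself (all names below are my own), sampling random waypoints and smoothing them with a Catmull-Rom spline:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom spline point between p1 and p2, t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return 0.5 * (2 * p1
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

def random_trajectory(n_waypoints=6, steps_per_segment=30, bound=1000.0, seed=0):
    """Sample random 3D waypoints and return a smooth, densely sampled camera path."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-bound, bound, size=(n_waypoints, 3))
    pts = np.vstack([pts[0], pts, pts[-1]])  # pad ends so the spline hits both endpoints
    path = []
    for i in range(1, len(pts) - 2):
        for t in np.linspace(0.0, 1.0, steps_per_segment, endpoint=False):
            path.append(catmull_rom(pts[i - 1], pts[i], pts[i + 1], pts[i + 2], t))
    return np.array(path)

poses = random_trajectory()  # (150, 3) array of camera positions
```

The resulting positions (plus e.g. look-ahead rotations derived from consecutive points) could then be written as keyframes into a level sequence.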
Describe the bug
Whenever I try to render a sequence in one of the City Sample maps, I get:
[2024.07.04-06.14.33:000][611]LogEasySynth: UTextureStyleManager::OnLevelActorAdded: Adding actor 'BP_CrowdCharacter_C_15'
[2024.07.04-06.14.33:000][611]LogWindows: Error: appError called: Assertion failed: (Index >= 0) & (Index < ArrayNum) [File:D:\build++UE5\Sync\Engine\Source\Runtime\Core\Public\Containers\Array.h] [Line: 771]
Array index out of bounds: 1 from an array of size 1
UnrealEditor_Renderer
UnrealEditor_Engine
UnrealEditor
UnrealEditor
UnrealEditor_Core
UnrealEditor_Core
UnrealEditor_RenderCore
UnrealEditor_RenderCore
UnrealEditor_Core
UnrealEditor_Core
kernel32
ntdll
or:
Assertion failed: Index>=0 && Index<NumBits [File:D:\build++UE5\Sync\Engine\Source\Runtime\Core\Public\Containers\BitArray.h] [Line: 1410]
UnrealEditor_Renderer
UnrealEditor_Engine
UnrealEditor
UnrealEditor
UnrealEditor_Core
UnrealEditor_Core
UnrealEditor_RenderCore
UnrealEditor_RenderCore
UnrealEditor_Core
UnrealEditor_Core
kernel32
ntdll
I think it might have to do with MASS. Has anyone tested this before?
To Reproduce
Steps to reproduce the behavior:
Download CitySample 5.3
Create Level Sequence
Run EasySynth
The crash happens when a render style (e.g. Color/Depth/etc.) has finished.
Expected behavior
I expect the renders to complete without any hiccups.
Screenshots
If applicable, add screenshots to help explain your problem.
Configuration (if applicable):
Additional context
I will test whether the same error occurs in 5.2.
The error is very obscure and I don't see where it could originate. A normal Movie Render doesn't crash, however, and using the same sequence in the empty level of the City Sample doesn't crash either. The crash appears to happen right after one render cycle finishes and the next is about to start. Maybe it needs to reset the actor locations and does things that are not supposed to work? Any help would be appreciated.
Describe the bug
Instead of depth and normal images, UE renders RGB images.
To Reproduce
No consistent repro available. It happens somewhat randomly without a clear pattern.
Expected behavior
RGB, depth and normal images rendered.
Configuration (if applicable):
I would like to know whether this plugin works well with the EXR format.
Hi Nikola!
Do you have any update on implementing post process material?
I see that EasySynth is using the default render settings from UE (UMovieRenderPipelineProjectSettings):
After a render with EasySynth, if I open the Movie Render Queue, I can see in the "Settings" tab that a temporary file called "MoviePipelineMasterConfig_0" was created.
This temporary settings file seems to be the default settings file EasySynth uses to render.
Would it be possible for EasySynth to use a custom settings file instead of the default one?
A custom settings file in which we could turn on anti-aliasing and add any custom post-process material? Or maybe overwrite the default one with custom settings?
That would be awesome!
Thanks!
Describe the bug
Hi, I'm trying to install the EasySynth plugin under Windows / UE5, but when building the engine with the plugin, I get the following error:
'UMoviePipelineExecutorBase *UMoviePipelineQueueSubsystem::RenderQueueWithExecutor(TSubclassOf<UMoviePipelineExecutorBase>)': cannot convert argument 1 from 'const FSoftClassPath' to 'TSubclassOf<UMoviePipelineExecutorBase>' UE5 xxx\UnrealEngine\Engine\Plugins\EasySynth\Source\EasySynth\Private\SequenceRenderer.cpp 406
If I remove EasySynth from my Plugins folder, the build runs without issues. Did you test the Plugin under Windows in UE5 yet? Is there something special I need to consider? Thanks!
Here's the full output:
1>[3/5] Compile Module.EasySynth.cpp
1>xxx\UnrealEngine\Engine\Plugins\EasySynth\Source\EasySynth\Private\SequenceRenderer.cpp(406): error C2664: 'UMoviePipelineExecutorBase *UMoviePipelineQueueSubsystem::RenderQueueWithExecutor(TSubclassOf<UMoviePipelineExecutorBase>)': cannot convert argument 1 from 'const FSoftClassPath' to 'TSubclassOf<UMoviePipelineExecutorBase>'
1>xxx\UnrealEngine\Engine\Plugins\EasySynth\Source\EasySynth\Private\SequenceRenderer.cpp(406): note: No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called
1>xxx\UnrealEngine\Engine\Plugins\MovieScene\MovieRenderPipeline\Source\MovieRenderPipelineEditor\Public\MoviePipelineQueueSubsystem.h(45): note: see declaration of 'UMoviePipelineQueueSubsystem::RenderQueueWithExecutor'
1>xxx\UnrealEngine\Engine\Plugins\EasySynth\Source\EasySynth\Private\Widgets\SemanticClassesWidgetManager.cpp(73): warning C4996: 'SColorBlock::FArguments::IgnoreAlpha': IgnoreAlpha is deprecated. Set AlphaDisplayMode to EColorBlockAlphaDisplayMode::Ignore instead. Please update your code to the new API before upgrading to the next release, otherwise your project will no longer compile.
1>xxx\UnrealEngine\Engine\Plugins\EasySynth\Source\EasySynth\Private\Widgets\SemanticClassesWidgetManager.cpp(282): warning C4996: 'SColorBlock::FArguments::IgnoreAlpha': IgnoreAlpha is deprecated. Set AlphaDisplayMode to EColorBlockAlphaDisplayMode::Ignore instead. Please update your code to the new API before upgrading to the next release, otherwise your project will no longer compile.
1>xxx\UnrealEngine\Engine\Plugins\EasySynth\Source\EasySynth\Private\Widgets\WidgetManager.cpp(380): warning C4996: 'FAssetData::ObjectPath': FName asset paths have been deprecated. Use GetSoftObjectPath to get the path this asset will use in memory when loaded, or GetObjectPathString() if you were just doing ObjectPath.ToString(). Please update your code to the new API before upgrading to the next release, otherwise your project will no longer compile.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
I can build UE5 with the EasySynth plugin without an error.
Configuration (if applicable):
UE's Movie Render Queue allows users to specify an order and render different level sequences one by one, as shown in the image below. I am wondering whether similar functionality can be added to EasySynth, or whether it can already be done.
In one use case I encountered, I sample camera poses with different ranges for different level sequences and augment the scene differently each time. It would be helpful to schedule the sequences so that no intervention is needed during the rendering process.
I use this code to convert the optical flow data, but the result doesn't look right.
Here is my Python code:
import cv2
import torch
import torch.nn as nn
import numpy as np


def img2flow(img, scale):
    '''
    :param img: cv img bgr
    :param scale: optical scale
    :return: dx dy tensor
    '''
    h = img.shape[0]
    w = img.shape[1]
    img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # h is angle, s is intensity
    ang = img[:, :, 0]  # * 2 * np.pi / 180
    mag = img[:, :, 1] / scale
    dx, dy = cv2.polarToCart(mag, ang, angleInDegrees=True)
    dx = w * dx
    dy = h * dy
    dx = torch.Tensor(dx).unsqueeze(0).unsqueeze(0)
    dy = torch.Tensor(dy).unsqueeze(0).unsqueeze(0)
    # print(dx.shape)
    print(dx)
    print(dx.max())
    return dx, dy


def read_img2tensor(path):
    img = cv2.imread(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = img.transpose(2, 0, 1)
    tensor = torch.Tensor(img).float().unsqueeze(0)
    # 1*3*h*w
    return tensor


def vis_tensor(winname, tensor):
    # tensor 1 3 h w
    mat = tensor.squeeze().detach().numpy()
    mat = np.uint8(mat)  # float32 --> uint8
    mat = mat.transpose(1, 2, 0)  # mat_shape: (982, 814, 3)
    mat = cv2.cvtColor(mat, cv2.COLOR_BGR2RGB)
    # cv2.imshow(winname, mat)
    cv2.imwrite(r'E:\data\EasySynth_test2/' + winname + '.jpg', mat)
    return 0


def warp(x, flo):
    """
    warp an image/tensor (im2) back to im1, according to the optical flow
    x: [B, C, H, W] (im2)
    flo: [B, 2, H, W] flow
    """
    B, C, H, W = x.size()
    # mesh grid
    xx = torch.arange(0, W).view(1, -1).repeat(H, 1)
    yy = torch.arange(0, H).view(-1, 1).repeat(1, W)
    xx = xx.view(1, 1, H, W).repeat(B, 1, 1, 1)
    yy = yy.view(1, 1, H, W).repeat(B, 1, 1, 1)
    grid = torch.cat((xx, yy), 1).float()
    # if x.is_cuda:
    #     grid = grid.cuda()
    vgrid = grid + flo
    # scale grid to [-1, 1]
    vgrid[:, 0, :, :] = 2.0 * vgrid[:, 0, :, :].clone() / max(W - 1, 1) - 1.0
    vgrid[:, 1, :, :] = 2.0 * vgrid[:, 1, :, :].clone() / max(H - 1, 1) - 1.0
    vgrid = vgrid.permute(0, 2, 3, 1)
    output = nn.functional.grid_sample(x, vgrid, mode="bilinear", align_corners=False)
    mask = torch.autograd.Variable(torch.ones(x.size()))
    mask = nn.functional.grid_sample(mask, vgrid)
    # if W == 128:
    #     np.save('mask.npy', mask.cpu().data.numpy())
    #     np.save('warp.npy', output.cpu().data.numpy())
    mask[mask < 0.9999] = 0
    mask[mask > 0] = 1
    return output * mask


if __name__ == '__main__':
    # 'exr'
    path0 = r'E:\data\EasySynth_test\ColorImage/' + 'testSeq.0008.jpeg'
    path1 = r'E:\data\EasySynth_test\ColorImage/' + 'testSeq.0007.jpeg'
    # flow_path = r'E:\data\EasySynth_test\OpticalFlowImage_scale2/' + 'testSeq.0008.exr'
    flow_path = r'E:\data\EasySynth_test\OpticalFlowImage_scale1/' + 'testSeq.0008.exr'
    flow_img = cv2.imread(flow_path, cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)
    # flow_img = cv2.imread(flow_path)
    print(flow_img.max())
    dx, dy = img2flow(flow_img, scale=1)
    flow = torch.cat((dx, dy), dim=1)
    # test: read img0 and img1, then warp img1 back to img0
    img0 = read_img2tensor(path0)
    img1 = read_img2tensor(path1)
    img1_0 = warp(img1, flow)
    vis_tensor('0', img0)
    vis_tensor('0warp', img1_0)
testSeq.0007.jpeg:
testSeq.0008.jpeg:
And warping frame 7 to frame 8 with testSeq.0008.exr looks like this:
I also tried other ways of warping, such as warping 0008.jpeg to 0007.jpeg.
Is there something wrong?
Besides, when I use the .jpeg format to get the optical flow data (replacing the cv2.imread call accordingly), the values are far out of range, and in that case the scale doesn't seem to be used correctly in my code.
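Independent of how EasySynth encodes the flow in the EXR, the warp function can be validated in isolation with a synthetic flow field whose correct result is known; if this check passes, the problem most likely lies in the HSV decoding step. A self-contained sketch:

```python
import torch
import torch.nn as nn

def warp_check(x, flo):
    """Warp x along flow flo (in pixels); flo[:, 0] is dx, flo[:, 1] is dy."""
    B, C, H, W = x.size()
    xx = torch.arange(0, W).view(1, -1).repeat(H, 1)
    yy = torch.arange(0, H).view(-1, 1).repeat(1, W)
    grid = torch.stack((xx, yy)).unsqueeze(0).repeat(B, 1, 1, 1).float()
    vgrid = grid + flo
    # This (W - 1) normalization matches align_corners=True below.
    vgrid[:, 0] = 2.0 * vgrid[:, 0] / max(W - 1, 1) - 1.0
    vgrid[:, 1] = 2.0 * vgrid[:, 1] / max(H - 1, 1) - 1.0
    return nn.functional.grid_sample(
        x, vgrid.permute(0, 2, 3, 1), mode="bilinear", align_corners=True)

# Synthetic check: a flow of +2 px in x should fetch each pixel from 2 px to its right.
H, W = 8, 8
img = torch.arange(W).float().view(1, 1, 1, W).repeat(1, 1, H, 1)  # pixel value = column index
flow = torch.zeros(1, 2, H, W)
flow[:, 0] = 2.0
warped = warp_check(img, flow)
assert torch.allclose(warped[0, 0, :, :W - 2], img[0, 0, :, 2:])
```

One detail worth noting: the normalization 2 * v / (W - 1) - 1 corresponds to align_corners=True in grid_sample, while the code in the question pairs it with align_corners=False, which introduces a roughly half-pixel shift.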
Hi, Thanks for the great work.
I am new to Unreal, so my questions may not make sense; thanks for your patience in advance.
Hello, thanks for sharing the EasySynth plugin.
I was wondering if there is any way to get real-time skeleton tree data.
For example, the X, Y, Z position/rotation of the spine start-end, head center etc.
I want to prepare skeleton data based on ST-GCN to experiment (test/learn).
Please help me.
Thank you.
Hi Nikola!
I'm still using your tool everyday and it's a huge help for me! Thank you so much!
Would it be possible to get the option to select my own post process material in the EasySynth UI?
something like that:
So far, I've added my custom PP material to the EasySynthMoviePipelineConfig file. But it requires restarting Unreal to load the change, and only then will EasySynth render the new PP material, and it renders it once per selected "target" (i.e., if rendering Color images and Depth images, my custom PP material will be rendered twice: once with the Color images and once with the Depth images).
It's not the friendliest workflow, but it works. So I'm wondering whether this feature could be added to the plugin, if it's not too much work?
Thank you!
Noel.
Describe the bug
Rendered image order towards the end of the sequences doesn't match camera motion. E.g. frame 100 appears to be behind frame 99, even though the camera is moving forward.
However, poses reported in CSV output seem to be in the correct order - suggesting this could be a race condition in the rendering queue.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Images should appear to be following a single smooth motion.
Screenshots
If applicable, add screenshots to help explain your problem.
Configuration (if applicable):
Additional context
Add any other context about the problem here.
Hi! I would like to know if there's a way to render multiple sequences automatically, instead of going one by one, as it would save a lot of time. I've tried to create a Python script, but it's impossible to run the plugin with it. Thanks!
Hi,
Can you please confirm if EasySynth can generate optical flow for 3D Niagara smoke particles in UE5? If not, is there any workaround? I am trying to generate an optical flow simulated dataset of smoke plumes from a drone's perspective in a forest environment.
Hi, I find that the optical flow is incorrect when the camera moves in dy without dx.
In this example the optical flow is correct when the camera moves forward or backward, but moving up or down is incorrect.
Thank you, looking forward to your reply.
Describe the bug
Depth map rendering causes a crash in UE5
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Color and depth images rendered.
Screenshots
If applicable, add screenshots to help explain your problem.
Configuration (if applicable):
Additional context
Crash reports with stack traces attached.
Hi @NikolaJov96,
Sorry if it's a stupid question, but is there a way to resize the semantic segmentation window? I can't find a way to resize it or to get a scroll bar.
Thanks!
Hello! Thank you for developing and supporting such a handy tool. I have tried to use it with a multi-camera rig setup but met some difficulties that look like a bug. Could you please take a look or comment on this issue? Thanks!
Describe the bug
When setting up an empty actor with multiple cameras, different output images are expected, but I get the same image for every camera in the rig.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Every camera generates an image from its corresponding viewpoint, along with matching metadata.
The images are all the same, but the metadata matches each camera.
Additional context
While debugging, RigCameras[0] changes its RelativeTransform after RigCameras[0]->SetRelativeTransform(OriginalCameraTransform); but the images are still from the same PoV.
Also, right before &USequenceRenderer::StartRendering the camera has the changed RelativeLocation, but by the next FRendererTarget::GetCameras() call it already has the default location again. If the FoV is changed, it remains changed.
When the cameras have different FoVs, images with different FoVs are rendered, but the PoV is still the same.
When rendering with the Movie Render Queue with Camera -> Render All Cameras selected, a set of images with different PoVs is generated.
Env
UE5.2.1, ToT
UE 4.27.2, v1.2.0
Optical flow output is only correct if no actors are moving in the scene during rendering.
Unfortunately Unreal Engine (currently) does not provide functionality that would enable generating such optical flow directly from the engine (especially without modifying the engine source).
Hi Nikola!
I just started using your plugin and it is amazing!
I have big issues with anti-aliasing (see attached image). Switching to different types of AA didn't help.
Do you have any suggestions for getting better AA (like adding the AA option in the Movie Render Queue settings)?
Thanks!
Noel.
UE 5.0, 5.1
Windows 11 pro
RTX 4090
Describe the bug
I only tested this in UE5. I'm not sure whether this is a bug or I'm doing something wrong, but for some colors the plugin will produce erroneous colors in the semantic map; consequently, pixels belonging to the same class will have color values that differ by one.
For example, I have a class to which I assign the RGB color (1.0, 0.502886, 0.0) using the Color Picker in UE. According to SemanticClasses.csv, this class gets mapped to (255, 188, 0). However, in the final image, some pixels within the class are incorrectly assigned the value (255, 189, 0). This only happens for some color values.
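A possible explanation (my speculation, not confirmed by the plugin) is rounding at the linear-to-sRGB conversion: the linear value 0.502886 lands almost exactly on an 8-bit sRGB quantization boundary, so tiny shading or precision differences can flip the stored byte between 188 and 189. A quick check:

```python
def linear_to_srgb(x):
    """Standard linear -> sRGB transfer function (IEC 61966-2-1)."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1.0 / 2.4) - 0.055

v = linear_to_srgb(0.502886) * 255.0
# v is approximately 188.0, i.e. right on the 188/189 rounding edge
print(round(v))
```

If this is the cause, picking linear color values that map to byte values well away from a .5 boundary should avoid the flicker between adjacent bytes.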
To Reproduce
Steps to reproduce the behavior:
ue5
branch from source
Expected behavior
All pixels belonging to one semantic class have the exact same color value.
Screenshots
Color in Unreal:
Color after opening & displaying image using Python:
Configuration (if applicable):
Figure out how the LOCTEXT works and apply it properly
Is it possible to get the 2D or 3D bounding boxes of the actors (e.g. vehicles, pedestrians) in each frame?
A user might want to add a post-process material to the camera used inside the level sequence to achieve a visual effect or introduce distortions. Do not clear that material when managing renderer target post-process materials.
Is there a way to automatically annotate our synthesized images? I understand we can render semantically segmented images, but those images in the output are simply just colored, rather than labeled. Is there a way we can generate some sort of annotation for the dataset we synthesize?
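As a do-it-yourself annotation step, the color-coded semantic images can be converted offline into integer label maps using the colors listed in SemanticClasses.csv. A sketch (the class list and color tuples below are hypothetical placeholders, not values from the plugin):

```python
import numpy as np

def colors_to_labels(image, class_colors):
    """Map an (H, W, 3) color-coded semantic image to an (H, W) integer label map.

    class_colors: list of (R, G, B) tuples, where the index is the class id;
    pixels matching no class are assigned -1.
    """
    labels = np.full(image.shape[:2], -1, dtype=np.int32)
    for class_id, color in enumerate(class_colors):
        mask = np.all(image == np.asarray(color, dtype=image.dtype), axis=-1)
        labels[mask] = class_id
    return labels

# Hypothetical two-class example
classes = [(0, 0, 0), (255, 188, 0)]
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 188, 0)
labels = colors_to_labels(img, classes)  # label 1 at (0, 0), label 0 elsewhere
```

The resulting label maps can then be fed into whatever annotation format a training pipeline expects (e.g. per-pixel PNG masks or COCO-style polygons derived from them).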
After exploring other options, I really like the implementation provided here.
However, the project seems constrained to sequence-based (video-like) workflows. Is there any way to expose some of the internals for programmatic use?
Having simple library calls to provide a camera and trigger the capture of a single frame would open up a lot of use cases outside of sequences. For example, I'm looking to programmatically reset/perturb the same scene countless times over a range of parameters and capture an individual frame for each "run".
Describe the bug
Normal axes are represented by RGB image channels. Currently, each axis contains an absolute value, making the normals seldom useful.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Normal axis values are mapped from the [-1, 1] range to [0, 1] using the (x + 1) / 2 formula.
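For reference, the proposed encoding and its inverse can be sketched as follows (a sketch assuming channels are already normalized to [0, 1] floats):

```python
import numpy as np

def encode_normals(normals):
    """Map normal components from [-1, 1] to [0, 1] via (x + 1) / 2 for image storage."""
    return (normals + 1.0) / 2.0

def decode_normals(pixels):
    """Invert the encoding: recover components in [-1, 1] via 2 * p - 1."""
    return 2.0 * pixels - 1.0

n = np.array([0.0, 0.0, 1.0])  # an up-facing unit normal
e = encode_normals(n)          # stored as (0.5, 0.5, 1.0)
assert np.allclose(decode_normals(e), n)
```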
Hi! Will the plugin be updated to UE 5.3? Thanks!
Creating a stereo camera ground truth rig is difficult with cameras inside blueprints or cine camera actors attached to other actors (see image below). In my workflow, I have attempted to use the Take Recorder to record a vehicle blueprint with an attached camera component, as well as using a level sequence and attaching a camera to a primary camera actor that I navigate in the level sequence. For the attached camera or the camera within the blueprint, the camera poses cannot be exported by EasySynth. The primary camera's pose in the example below can be exported.
Hi,
I found in the "Depth Images" section under "Outputs' structure details" that it states, "Depth values are scaled between 0 and the specified Depth range value". I have the following questions:
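Based on that sentence, recovering metric depth from a rendered depth image should be a linear rescale by the configured depth range. A sketch under that assumption (worth validating against known scene geometry before trusting it):

```python
import numpy as np

def decode_depth(depth_image, depth_range_m):
    """Recover metric depth from an integer depth render scaled to [0, depth_range_m]."""
    max_value = np.iinfo(depth_image.dtype).max  # 255 for 8-bit, 65535 for 16-bit
    return depth_image.astype(np.float64) / max_value * depth_range_m

# Example: mid-gray in an 8-bit image with a 30 m depth range decodes to about 15 m
img = np.full((2, 2), 128, dtype=np.uint8)
depth = decode_depth(img, 30.0)
```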
Describe the bug
The PNG outputs are empty for all types of renders.
To Reproduce
Steps to reproduce the behavior:
Configuration (if applicable):
Either expand the README or add a button, whichever offers the best solution.
Hi,
I tried to export a scene mesh, but I found it is not aligned with the camera poses. Does anyone know if there is a way to export the mesh while keeping the mesh origin consistent with the camera poses?
Dear Developer!
I would like to ask if you can provide a workflow for segmenting roads in Unreal Engine.
Generally, roads are generated along a spline, and in some cases, such as "off-road" roads, the road is only a material, not a mesh.
Can you provide a workflow for applying a class to roads generated along splines?
Thanks in advance!
Regards,
Lóránt
Describe the bug
Release v4.0.1 does not work under UE 5.3.2.
To Reproduce
I have followed the installation guide
Expected behavior
Unreal project opens without any problems.
Screenshots
The following error occurs while opening the project.
Assertion failed: ModuleManager.IsModuleLoaded(ModuleName) [File:D:\build\++UE5\Sync\Engine\Source\Runtime\Core\Public\Modules\ModuleManager.h] [Line: 309] Tried to get module interface for unloaded module: 'ConsoleVariablesEditor'
UnrealEditor_MovieRenderPipelineSettings
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_EasySynth!USequenceRenderer::USequenceRenderer() [C:\Users\jovan\Desktop\EasySynth\HostProject\Plugins\EasySynth\Source\EasySynth\Private\SequenceRenderer.cpp:86]
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_Core
UnrealEditor_Core
UnrealEditor_Projects
UnrealEditor_Projects
UnrealEditor
UnrealEditor
UnrealEditor
UnrealEditor
UnrealEditor
UnrealEditor
UnrealEditor
kernel32
ntdll
Configuration (if applicable):