
blainder-range-scanner's Introduction

Range scanner simulation for Blender

This Blender add-on enables you to simulate lidar, sonar and time-of-flight scanners in your scene. Each point of the generated point cloud is labeled with the object or part ID that was set before the simulation. The obtained data can be exported in various formats for use in machine learning tasks (see the Examples section).

The paper can be found here: https://www.mdpi.com/1424-8220/21/6/2144

Original scene

The figure shows a group of chairs (left), a Blender camera object and a light source (right). The chair legs and seats are given different random colors to make them easier to distinguish in the following images.

3D point clouds

Each of the four figures shows a generated three-dimensional point cloud. The colors of the data points have different meanings:

  • top left: original color from object material
  • top right: grey scale intensity representation
  • bottom left: each color stands for one object category (blue = floor, red = chair)
  • bottom right: each color represents one object subcategory (blue = floor, red/green = seats, orange/turquoise = legs)

Note: the left and middle chairs have the same colors because the subobjects were classified identically (see Classification).

Supported formats: .hdf5, .csv, .ply and .las (see Export).

2D annotated images

In addition to the 3D point clouds, the add-on can also export 2D images.

  • top left: the image rendered by Blender's render engine
  • top right: depthmap
  • bottom left: segmented image
  • bottom right: segmented image with bounding box annotations

Supported formats: see Export.



Table of Contents

Installation
Dependencies
Usage (GUI)
Usage (command line)
Visualization
Examples
Automatic scene generation
Development
About
License



Installation

It is recommended to use Blender 3.3 LTS, for which the add-on has been updated; support for this version is prepared in this branch. Feel free to open an issue if you face problems with Blender 3.x while using that branch.

WARNING: DO NOT install the add-on both ways, or the two versions will get mixed up and cause errors.

For GUI usage

  1. Clone the repository. This might take some time as the examples are quite large.
  2. Zip the range_scanner folder.
  3. Inside Blender, go to Edit -> Preferences... -> Add-ons -> Install... and select the .zip file.

For script usage

  1. Clone the repository.
  2. Copy the range_scanner folder to Blender 3.3/3.3/scripts/addons_contrib.

The full installation of Blainder and all dependencies inside a fresh Blender copy can be done using the following commands on Ubuntu:

sudo apt-get update
sudo apt-get -y install git

wget https://download.blender.org/release/Blender3.3/blender-3.3.5-linux-x64.tar.xz
tar -xf blender-3.3.5-linux-x64.tar.xz

git clone https://github.com/ln-12/blainder-range-scanner.git

mkdir ./blender-3.3.5-linux-x64/3.3/scripts/addons_contrib/
cp -r ./blainder-range-scanner/range_scanner ./blender-3.3.5-linux-x64/3.3/scripts/addons_contrib/

./blender-3.3.5-linux-x64/3.3/python/bin/python3.10 -m ensurepip
./blender-3.3.5-linux-x64/3.3/python/bin/python3.10 -m pip install -r ./blainder-range-scanner/range_scanner/requirements.txt

./blender-3.3.5-linux-x64/blender

For Windows, run the equivalent commands via PowerShell (as administrator) after installing Blender:

cd 'C:\Program Files\Blender Foundation\Blender 3.3\'
.\3.3\python\bin\python.exe -m ensurepip
 
.\3.3\python\bin\python.exe -m pip install -r <Path-To-Blainder>\blainder-range-scanner\range_scanner\requirements.txt



Dependencies

To use this add-on, you need to install the Python packages listed in range_scanner/requirements.txt. This can be done automatically or manually, as described below.

Automatic installation

After installing the add-on, a panel appears that lets you install the missing dependencies. You might need administrative privileges to perform this action (more info).

Manual installation

Open a terminal (as admin on Windows) and navigate into blainder-range-scanner/range_scanner. Then run one of the following commands, depending on your system.

Windows

"C:\Program Files\Blender Foundation\Blender 3.3\3.3\python\bin\python.exe" -m pip install -r requirements.txt

WARNING: Make sure that the packages are installed inside C:\Program Files\Blender Foundation\Blender 3.3\3.3\python\lib\site-packages, not C:\Users\USER\AppData\Roaming\Python\Python310\site-packages\, or Blender won't find them!

macOS

/Applications/Blender.app/Contents/Resources/3.3/python/bin/python3.10 -m pip install -r requirements.txt



Usage (GUI)

In Blender's 3D View, open the sidebar on the right (click on the little <) and select Scanner.


If necessary, install the required dependencies (see Automatic installation).

Please note that not all of the following options are available for all scanner types.


General settings


Scanner object

Select the object in the scene which should act as the range sensor. This object must be of type Camera.

Join static meshes

If enabled, all static meshes in the scene are joined into one mesh prior to the simulation.

Generate point clouds

This operator starts the actual scanning process. You should set all parameters (see the following sections) before you hit the button. It is generally recommended to open the console window to see any warnings or errors occurring during the simulation.


Presets


Scanner category / name

In this section you can select a predefined sensor. First, choose one of the categories Lidar, Sonar or Time of flight. Then you can select a specific sensor.

Load preset

When pressing Load preset, all parameters are applied automatically.


Scanner


Scanner type

The scanner type lets you define the operation mode of the sensor. Depending on the selected type, you can further specify the characteristics of the sensor.

Field of view (FOV) / Resolution

The fields of view define the area which is covered by the scan horizontally and vertically. The resolution indicates the angle between two measurements.
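For example, a 40° field of view sampled at a 0.33° resolution yields roughly 40 / 0.33 ≈ 121 measurement angles.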

Rotation

In the case of rotating sensors, the number of rotations per second is used to simulate correct measurements during animations.


In the case of the sideScan scanner type, you can set additional parameters (more info) and define the water profile for this scene.


Water profile

The water surface level defines the z coordinate in your scene which is referred to as a water depth of 0 meters. In the table below, you can fill in values for different water layers. Keep in mind to always start with a layer at 0 m depth. This approach is used to quickly adjust the water level without the need to move the whole scene.


Example: you set a water surface level of z = 10 and add three layers at depths of 0 m, 3 m and 6 m. This means there is a layer between 0-3 m, another one between 3-6 m and a last layer which starts at 6 m depth and is infinitely deep (until it hits the bottom). Related to the scene's z coordinate, this means that you have borders between the layers at z = 7 and z = 4.


Reflectivity

The minimum reflectivity needed to capture a reflected ray is approximated by the following model. At a distance of dmin, a reflectivity of rmin is needed, while at dmax the reflectivity needs to be greater than rmax. Measurements below dmin are captured as long as the reflectivity is > 0. For distances above dmax, no values are registered.
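A minimal sketch of this threshold model in Python, assuming the required reflectivity is interpolated linearly between (dmin, rmin) and (dmax, rmax) (the exact interpolation used by the add-on may differ):

def required_reflectivity(d, d_min, r_min, d_max, r_max):
    # reflectivity a target needs at distance d to register a hit
    if d < d_min:
        return 0.0              # any reflectivity > 0 is captured
    if d > d_max:
        return float("inf")     # no values are registered
    # linear interpolation between (d_min, r_min) and (d_max, r_max)
    return r_min + (d - d_min) * (r_max - r_min) / (d_max - d_min)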

The corresponding panel lets you set the minimum and maximum reflectivity for the scene's targets at given distances.

The maximum reflection depth defines how often a ray can be reflected on surfaces before it gets discarded.

The reflectivity is defined by the material:

Diffuse material


A diffuse material can be defined by changing the Base Color attribute of the Principled BSDF shader. The reflectivity is taken from the alpha component of the material's color.

Texture


To use a texture, add an Image Texture node and link it to the Base Color input.

Glass


To model glass objects, simply use the Glass BSDF shader and set the correct index of refraction with the IOR attribute.

Mirror


To simulate a fully reflecting surface, you can set the Metallic attribute of the Principled BSDF shader to 1.0.
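These materials can also be created from a script. A minimal bpy sketch (node names are Blender's defaults; the material names are made up for illustration):

import bpy

# Diffuse target: reflectivity is read from the alpha component of Base Color
diffuse = bpy.data.materials.new("diffuse_target")
diffuse.use_nodes = True
bsdf = diffuse.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Base Color"].default_value = (0.8, 0.1, 0.1, 0.6)  # RGBA, alpha = reflectivity

# Mirror: fully reflecting surface
mirror = bpy.data.materials.new("mirror_target")
mirror.use_nodes = True
mirror.node_tree.nodes["Principled BSDF"].inputs["Metallic"].default_value = 1.0

# Glass: replace the Principled BSDF with a Glass BSDF and set the IOR
glass = bpy.data.materials.new("glass_target")
glass.use_nodes = True
nodes = glass.node_tree.nodes
nodes.remove(nodes["Principled BSDF"])
glass_node = nodes.new("ShaderNodeBsdfGlass")
glass_node.inputs["IOR"].default_value = 1.45
glass.node_tree.links.new(glass_node.outputs["BSDF"], nodes["Material Output"].inputs["Surface"])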


Classification

Objects can be classified in the following two ways:

Via custom properties

Select an object and add a custom property categoryID to set the main category (here: chair) and partID to set the subcategory (here: legs/plate). If no categoryID is provided, the object name is used as the category name instead. If no partID is given, the material name is used (see below).
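The same properties can be set from a script, for example:

import bpy

obj = bpy.context.active_object
obj["categoryID"] = "chair"   # main category
obj["partID"] = "legs"        # subcategory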

Via different materials

The main category has to be set as explained above via categoryID. To differentiate parts within a single object, you can select the faces in Edit Mode and assign a specific material (here: leg/plate). Each subobject with the same material is treated as one category, even if they belong to different objects.
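Scripted, the per-face assignment could look like the following sketch (it assumes the object is in Edit Mode with the target faces selected):

import bpy

obj = bpy.context.active_object
leg_material = bpy.data.materials.new("leg")   # the material name acts as the part label
obj.data.materials.append(leg_material)

# assign the new material slot to the currently selected faces
obj.active_material_index = len(obj.data.materials) - 1
bpy.ops.object.material_slot_assign()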


Animation


The settings in this panel correspond to the values inside Blender's Output Properties tab. You can define the range of frames, the number of skipped frames in each animation step and the number of frames per second (relevant for rotating scanners). Any technique inside Blender to simulate motion and physics can be applied.


Noise


Constant offset

The constant offsets are applied to each measurement without any variation. You can choose between an absolute offset, which is the same for each distance, or a relative offset as a percentage of the distance.

Random offset

To simulate random errors during the measurement, you can specify the distribution with the given parameters.
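As an illustration, the combination of both offset types with Gaussian noise could be modeled as follows (how the add-on combines the terms internally is an assumption here):

import random

def noisy_distance(d, absolute_offset, relative_offset_percent, mu, sigma):
    # constant absolute offset + constant relative offset + random Gaussian error
    return (d
            + absolute_offset
            + d * relative_offset_percent / 100.0
            + random.gauss(mu, sigma))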


Weather simulation

Rain


To simulate rain, just set the amount of rain in millimeters per hour (see this paper).

Dust


For dust simulation, you can set the parameters to define a dust cloud starting at a given distance and with a given length (see this paper).


Visualization


If this setting is enabled, the generated point cloud is added to the Blender scene.


Export


Raw data

This add-on can output the generated point clouds as .hdf5, .csv, .ply and .las files.

The option Export single frames defines whether each animation frame is exported to a separate file or all steps are exported into a single file.

Images

In the case of time-of-flight sensors, you can furthermore export the rendered image along with a segmented image (including Pascal VOC object descriptions) and a depthmap. You can specify the value range for the depthmap. All depth values at or below the minimum are white, whereas values at or above the maximum appear black. Values in between are linearly interpolated.
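The mapping from distance to gray value can be sketched as follows (a linear ramp, as described above):

def depth_to_gray(d, depth_min, depth_max):
    # 1.0 (white) at or below the minimum, 0.0 (black) at or above the maximum
    t = (d - depth_min) / (depth_max - depth_min)
    return 1.0 - min(max(t, 0.0), 1.0)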


DEBUG

These options are only meant for debugging the add-on. Use them with caution, as adding debug output or debug lines to the process can lead to significant performance issues!



Usage (command line)

When the code is located inside the addons_contrib directory (see script usage), you can call the scanner functions from a script in the following way:

import bpy
import range_scanner

# Kinect
range_scanner.ui.user_interface.scan_static(
    bpy.context, 

    scannerObject=bpy.context.scene.objects["Camera"],

    resolutionX=100, fovX=60, resolutionY=100, fovY=60, resolutionPercentage=100,

    reflectivityLower=0.0, distanceLower=0.0, reflectivityUpper=0.0, distanceUpper=99999.9, maxReflectionDepth=10,
    
    enableAnimation=False, frameStart=1, frameEnd=1, frameStep=1, frameRate=1,

    addNoise=False, noiseType='gaussian', mu=0.0, sigma=0.01, noiseAbsoluteOffset=0.0, noiseRelativeOffset=0.0,

    simulateRain=False, rainfallRate=0.0, 

    addMesh=True,

    exportLAS=False, exportHDF=False, exportCSV=False, exportPLY=False, exportSingleFrames=False,
    exportRenderedImage=False, exportSegmentedImage=False, exportPascalVoc=False, exportDepthmap=False, depthMinDistance=0.0, depthMaxDistance=100.0, 
    dataFilePath="//output", dataFileName="output file",
    
    debugLines=False, debugOutput=False, outputProgress=True, measureTime=False, singleRay=False, destinationObject=None, targetObject=None
)       





# Velodyne
range_scanner.ui.user_interface.scan_rotating(
    bpy.context, 

    scannerObject=bpy.context.scene.objects["Camera"],

    xStepDegree=0.2, fovX=30.0, yStepDegree=0.33, fovY=40.0, rotationsPerSecond=20,

    reflectivityLower=0.0, distanceLower=0.0, reflectivityUpper=0.0, distanceUpper=99999.9, maxReflectionDepth=10,
    
    enableAnimation=False, frameStart=1, frameEnd=1, frameStep=1, frameRate=1,

    addNoise=False, noiseType='gaussian', mu=0.0, sigma=0.01, noiseAbsoluteOffset=0.0, noiseRelativeOffset=0.0, 

    simulateRain=False, rainfallRate=0.0, 

    addMesh=True,

    exportLAS=False, exportHDF=False, exportCSV=False, exportPLY=False, exportSingleFrames=False,
    dataFilePath="//output", dataFileName="output file",
    
    debugLines=False, debugOutput=False, outputProgress=True, measureTime=False, singleRay=False, destinationObject=None, targetObject=None
)  





# Sonar
range_scanner.ui.user_interface.scan_sonar(
    bpy.context, 

    scannerObject=bpy.context.scene.objects["Camera"],

    maxDistance=100.0, fovSonar=135.0, sonarStepDegree=0.25, sonarMode3D=True, sonarKeepRotation=False,

    sourceLevel=220.0, noiseLevel=63.0, directivityIndex=20.0, processingGain=10.0, receptionThreshold=10.0,   

    simulateWaterProfile=True, depthList= [
        (15.0, 1.333, 1.0),
        (14.0, 1.0, 1.1),
        (12.5, 1.52, 1.3),
        (11.23, 1.4, 1.1),
        (7.5, 1.2, 1.4),
        (5.0, 1.333, 1.5),
    ],

    enableAnimation=True, frameStart=1, frameEnd=1, frameStep=1,

    addNoise=False, noiseType='gaussian', mu=0.0, sigma=0.01, noiseAbsoluteOffset=0.0, noiseRelativeOffset=0.0, 

    simulateRain=False, rainfallRate=0.0, 

    addMesh=True,

    exportLAS=False, exportHDF=False, exportCSV=False, exportPLY=False, exportSingleFrames=False,
    dataFilePath="//output", dataFileName="output file",
    
    debugLines=False, debugOutput=False, outputProgress=True, measureTime=False, singleRay=False, destinationObject=None, targetObject=None
)  

The script can then be run by executing blender myscene.blend --background --python myscript.py on the command line.



Visualization

All generated data can be shown inside Blender by enabling the Add datapoint mesh option inside the Visualization submenu. It is also possible to visualize the data as rendered, segmented/labeled and depth images (see Export).

To view .las files, the tool CloudCompare can be used.

You can further use Potree Desktop to visualize the raw data. The generated .las files can be converted automatically by dragging them into the window or manually by using the Potree Converter:

 .\path\to\potree\PotreeConverter.exe .\path\to\data\data.las -o .\output_directory

This will generate a cloud.js file which you can drag and drop inside the Potree viewer.



Examples

See examples folder.

The .blend files contain preconfigured scenes. Example outputs are located inside the output folder, the used models can be found inside the models directory.



Automatic scene generation

See scene generation folder.

To generate a random landscape scene, run the following command on the command line:

python generate_landscapes.py

All parameters can be adjusted inside landscape.py. Example scenes are located inside the generated folder.



Development

This add-on is developed using Visual Studio Code and the Blender extension blender_vscode.

To run the add-on in debug mode, use the extension and start the add-on from there.

If you want to have autocomplete features, consider installing the fake-bpy-module package.
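For example (assuming the versioned package name on PyPI matching Blender 3.3):

pip install fake-bpy-module-3.3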

Feel free to fork, modify and improve our work! We would also appreciate contributions in the form of pull requests. Before starting one, it is a good idea to open an issue describing your idea.



About

This add-on was developed by Lorenzo Neumann at TU Bergakademie Freiberg.

Master thesis: Lorenzo Neumann. "Generation of 3D training data for AI applications by simulation of ranging methods in virtual environments", 2020.

Paper: Reitmann, S.; Neumann, L.; Jung, B. BLAINDER—A Blender AI Add-On for Generation of Semantically Labeled Depth-Sensing Data. Sensors 2021, 21, 2144. https://doi.org/10.3390/s21062144



License

Copyright (C) 2021 Lorenzo Neumann

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.


A brief summary of this license can be found here: https://tldrlegal.com/license/gnu-general-public-license-v3-(gpl-3)

Commercial license: If you want to use this software without complying with the conditions of the GPL-3.0 license, you can get a custom license. If you wish to obtain such a license, please feel free to contact me at [email protected] or via an issue.


Chair model used: Low Poly Chair

blainder-range-scanner's People

Contributors

apalkk, ln-12, ybachmann


blainder-range-scanner's Issues

Csv file with random CategoryID and partID

I am currently working with Blender and I am trying to use the BLAINDER add-on to simulate a lidar in a scene and get the point cloud data in a CSV file.
My problem is that even when I assign a categoryID and a partID as custom properties to different objects and different parts of the same object, I still get random numbers in the CSV file that don't make any sense.
I want the data in the CSV file to be labeled according to the custom properties that I assign.
Any help will be appreciated.

generate point cloud causes memory leak

The range scanner add-on causes a memory leak on macOS when I click 'generate point cloud'.

To Reproduce
Steps to reproduce the behavior:

  1. Open the attached Blender file.
  2. Click on one of the scene cameras in the outliner and go into camera view.
  3. Set it as the scanning object.
  4. Click the 'generate point cloud' button on the scanner panel.

Expected behavior
It should generate a set of points. Instead, Blender becomes unresponsive for a while, and memory usage keeps increasing to 100 GB and beyond until the system warns that it is running out of application memory; Blender then has to be force quit.


Desktop (please complete the following information):

  • OS: macOS Monterey 12.3
  • Blender version: 3.3

Additional context
It doesn't happen in all scenes; it tends to occur in larger scenes which contain more objects (but I am not sure about this correlation). I need to simulate a driving-environment lidar scan. I tried reducing the resolution and the resolution scale, but that doesn't seem to help.

The file size is beyond the attachment limit, so I am sharing it via Google Drive: https://drive.google.com/file/d/1EW7QjIgdqLoUigjzD_oIHWHurSIZEJ3x/view?usp=sharing

Error when trying to install dependencies (Linux)

Bug Description: When installing the range scanner add-on it tells me there are missing dependencies. Upon hitting the "Install dependencies" button I receive the following output in the console:

"ERROR: Command '['/snap/blender/3082/3.4/python/bin/python3.10', '-m', 'pip', 'install', 'Jinja2==3.0.2']' returned non-zero exit status 1"

However, I can manually use that same Python binary / pip to install Jinja2 and all of the other dependencies in the Readme. Despite being able to see them all listed in Blender's instance of pip (using pip list with the Blender Python executable), the add-on still does not see them and still requires the "Install dependencies" button to be pressed, repeating the error.

When I run the subprocess command directly in Blender's console (command: subprocess.run([sys.executable, "-m", "pip", "install", "Jinja2==3.0.2"], check=True)), it does not error out and instead returns "returncode=0". Despite this, the "missing dependencies" notice does not go away in the Scanner tab.

To Reproduce:

  1. Snap install Blender (currently snap is installing Blender 3.4.0)
  2. Launch Blender using the terminal
  3. Download the Blainder repository
  4. Extract all files and zip range_scanner folder
  5. In Blender's add-on menu select the range_scanner zip folder
  6. Select the "Scanner" tab and press "Install dependencies"
  7. See output error in terminal

Expected Behavior:
I have used this add-on with Blender 2.93.9 on a Windows 7 machine, and there the add-on installs dependencies fine. I now need the developed pipeline to run on a Linux machine, which results in the error described above.

Desktop:
  • OS: Ubuntu 20.04.5
  • Blender version: 3.4.0

How to increase contrast for depthmap?

To whom this may concern,

I am currently exploring the use of this software to simulate a drone's depth-image scan of a road with potholes in Blender for a university project.

As such, I have modeled a simple meshgrid with craters in it to represent a road with pothole defects, and positioned the camera directly above it (to represent a top-down drone's-eye view of the road).

I have applied the same material on the meshgrid object as the one applied to the original cube in the example file "script_usage.blend".

Using the function for static scans, I have been able to successfully scan the meshgrid and output the resulting depthmap as an image to a desired file location.

However, the problem is that, in keeping with real-world dimensions, I have set the depth of the potholes to be quite shallow wrt. the scene dimensions, with depths ranging from 25 to 75 mm.

Therefore, there is a great lack of contrast in the resulting depth image.

I have verified that the LIDAR capabilities are working by using it to create depthmaps of other objects (like the default arrangement in "script_usage.blend").

Is there any way that I can tweak the source code to increase the contrast/sensitivity of the scan so that the difference in depth, although small, becomes more apparent, such that it looks like the result of a Blender Z-pass render? For comparison, I created an example Blender-rendered depth image showcasing potholes of a similar depth, using the same scene as shown in the first screenshot.

I have tried looking around the source code for a bit but unfortunately it is beyond the scope of my undergraduate knowledge and problem solving ability.

Best regards

Rendered object surfaces in point cloud

Hi!
First of all thank you very much for sharing your code here. It will be very useful for me as I want to use it to create a synthetic dataset for a machine learning application.

I noticed that the points in the point cloud have the color of the material itself. What I would like to have instead are the colors of the rendered object surfaces, as if using the Viewport Shading mode.
I would like to know if this is possible, or what I would have to do to get a point cloud that contains this information.

Thank you for your help and best regards

Animation - Modifiers are removed when starting simulation

Description
I want to scan an animated tree with Blender. The tree was created with Sapling Tree Gen as a curve; then an armature was created, the curve was converted to a mesh, and a wind animation was added. Before clicking "generate point clouds", the animation works. However, my simulated point clouds do not show any animation effects, and after the simulation my tree mesh is suddenly static, i.e., the windSway and Skin modifiers are missing.

To Reproduce
Steps to reproduce the behavior:

  1. Activate the sapling tree gen add-on (Edit -> Preferences -> Add-ons).
  2. Add a tree: Type Shift + A -> Curve -> Sapling Tree Gen
  3. In the GUI, change to "Settings: Armature" and activate "Use Armature" and "Make Mesh"
  4. Change to "Settings: Animation" and activate "Armature Animation"
  5. In the View Layer, collapse the "treeArm" object, select the "treemesh" object and assign a material.
  6. Set the camera to look at the tree.
  7. Configure the settings in the Blainder Scanner Add-on (here: Generic lidar, rotating, etc.). Make sure to "Enable animation".
  8. Click "Generate point clouds"

Expected behavior
The generated point clouds clearly show that the tree was moving, i.e., the point clouds from the different frames differ. Furthermore, my animated scene object stays the way it was before the Blainder simulation (with the wind modifier, etc.).

Desktop (please complete the following information):

  • OS: Windows 10
  • Blender version: 3.3

Update BlAINder for Version 3.3

"It is recommended to use Blender 2.93 LTS. The next LTS will be available with Blender 3.3 (see here) for which the add-on will be updated. Support for that version is prepared in this branch. Feel free to open an issue if you face problems with Blender 3.x while using that branch.
WARNING: DO NOT install the addon via both ways or the two versions are mixed up and cause errors."

Is there already an updated version?

How to export the 3D point cloud of the scene?

Hi, thank you very much for your work; it's great and very helpful for my research. I am having a problem and would like to get your help. Now that the plugin can export the scan image and depth map of the side-scan sonar, I would like to know how to export the 3D point cloud of this scene, i.e., the height information and plane coordinates of the scene.

Compatibility issue with Blender 4.0

Hi,
I am trying to install Blainder for my project, which was built using Blender version 4.0. I am using Ubuntu, and I tried to install the add-on via the terminal, but I cannot see the add-on or use it in my project.
Before that, I tried to install it through the GUI via the add-on menu. I was able to add the scanner tab, but when I click on it I see a message asking to install dependencies. When I click on that, I get this error:
Command '['/snap/blender/4300/4.0/python/bin/python3.10', '-m', 'pip', 'install', 'Jinja2==3.0.2']' returned non-zero exit status 1.
Is it mandatory for this to work that I use the same Blender version (3.3) and Python 3.9?

I need your guidance on this please

Not compatible with ARM64 Macs

Describe the bug
Installation fails because of h5py.

To Reproduce
Install on an M1 Mac.

Expected behavior
No error.

Desktop (please complete the following information):

  • OS: macOS 12, M1
  • Blender version 2.93.5

Additional context
See: h5py/h5py#1810 and h5py/h5py#1981

Blainder crashing when modifiers present in scene

In our research, we perform Blainder scans of IFC files imported with BlenderBIM. In some cases, Blainder crashes when hitting the "Generate scans" button, not performing any scans. This seems to be related to modifiers being present in the scene.

Could you please clarify if this is an actual bug or whether this behaviour is expected and modifiers should thus not be used for scanning with Blainder?

Steps to reproduce the behavior:

  1. Open minimum example file: https://seafile.rlp.net/d/f2a5db7f0fa043daafc0/
  2. Set path and file name for point cloud
  3. Hit 'Generate point cloud'
  4. See error

Expected behavior
Blainder performing scan according to configuration and saving output files to specified path.

Error

location: :-1
Error: Python: Traceback (most recent call last):
  File "/home/kaufmann/.config/blender/2.93/scripts/addons/range_scanner/ui/user_interface.py", line 1637, in execute
    performScan(context, dependencies_installed, properties)
  File "/home/kaufmann/.config/blender/2.93/scripts/addons/range_scanner/ui/user_interface.py", line 1387, in performScan
    modifyAndScan(context, dependencies_installed, properties, None)
  File "/home/kaufmann/.config/blender/2.93/scripts/addons/range_scanner/ui/user_interface.py", line 1284, in modifyAndScan
    generic.startScan(context, dependencies_installed, properties, objectName)
  File "/home/kaufmann/.config/blender/2.93/scripts/addons/range_scanner/scanners/generic.py", line 228, in startScan
    bpy.ops.object.modifier_apply(apply_as='DATA', modifier=modifier.name)
  File "/home/kaufmann/blender-2.93.5-linux-x64/2.93/scripts/modules/bpy/ops.py", line 132, in __call__
    ret = _op_call(self.idname_py(), None, kw)
TypeError: Converting py args to operator properties: : keyword "apply_as" unrecognized

location: :-1

Environment:

  • OS: Ubuntu 20.04 LTS
  • Blender 2.93 LTS

AttributeError: 'Scene' object has no attribute 'scannerProperties'

Hi!
I am trying to use your toolbox for my application and I would like to utilize your scripts. In order to do that I have tried to run the script_usage.blend file. However, I am getting the following error.

File "C:\Program Files\Blender Foundation\Blender 2.93\2.93\scripts\addons_contrib\range_scanner\ui\user_interface.py", line 1565, in scan_static properties = scene.scannerProperties AttributeError: 'Scene' object has no attribute 'scannerProperties'

Since I am pretty new to Blender scripting I couldn't quite figure out the issue.
I would appreciate if someone can guide me in the right direction.
Thanks!

Ability for Channels

Hello,
in my opinion the laser simulation is wrong. In the following screenshots you can see (or assume^^) that there are 122 points vertically (a 40° FOV divided by a 0.33° angular resolution).

The Velodyne Ultra Puck has only 32 channels (https://velodynelidar.com/wp-content/uploads/2019/12/63-9378_Rev-F_Ultra-Puck_Datasheet_Web.pdf), which means there should also only be 32 points vertically. I got the following mail from the Velodyne support, since I was really wondering:

"In the Ultra puck sensor you have 32 laser beam distributed in a non linear manner in these 40 degrees vicinity giving you 0.33 degrees resolution in the middle line and it grows as you go toward the outer line."

Am I mistaken?
The ability to work with channels would be nice.

Regards
Thomas

Export point clouds in more commonly used file formats

Currently it is only possible to export point clouds in the following formats:

  • .las
  • .hdf
  • .csv

It would be really convenient if it were possible to export in other formats (.obj, .ply, ...).
The generated Blender objects for visualization currently can't be exported using the built-in Blender export functions.

Either add more export options in the "scanner window" of the add-on or enable exporting the visualizations via the existing Blender export functionality.

return array of getTargetMaterials may include 'None' elements

Describe the bug
If a material has a 'Material Output' node but no input nodes connected to it, the getTargetMaterials function in material_helper.py will return an array that contains a None value for this material.
This is because the value of 'links' will be an empty tuple, and therefore the loop 'for link in links:' never sets a material at the current targetMaterials[materialIndex] spot.

Later on in the scanning process this causes the following exception:
File "C:\Users\Yannic\AppData\Roaming\Blender Foundation\Blender\3.6\scripts\addons\range_scanner\material_helper.py", line 104, in getMaterialColorAndMetallic if material.texture is not None: AttributeError: 'NoneType' object has no attribute 'texture'

To Reproduce
Steps to reproduce the behavior:

  1. Create a material with only a 'Material Output' node.
  2. Assign the material to an object and start a scan.

Expected behavior

  • Output an error message that tells the user that the material (and what material exactly) is faulty.
  • Optionally: Ignore faulty materials and continue scanning anyway.

Desktop (please complete the following information):

  • OS: Windows
  • Blender version: 3.6.1

If you want I can create a fix for this and make a pull request.
Adding a condition in the getTargetMaterials function to check if links == () or len(links) == 0 should do the trick.

Unable to comprehend examples

I am trying to replicate the classification examples from the documentation with a fresh start. I import the chair model provided by you in the .obj format.
However, I see that the chair model is imported as a whole and not split into separate parts as mentioned in the example, i.e., legs and plate.
I would highly appreciate it if someone can nudge me in the right direction.
Thanks!

ToF Sensor simulation distortion, when adding Gaussian noise

When I add Gaussian noise to the simulation of a ToF sensor (scanner type = static), there is noise, but it also seems that everything is projected onto a sphere centered at the camera's origin. Where does this distortion come from?

Thanks.

Background operation or Context override raises AttributeError

Describe the bug

The add-on crashes with Blender in background mode or when overriding the context. Both use cases raise AttributeError when calling the scan_static function (and presumably others).

This precludes headless, batch processing of .blend files.

To Reproduce

For a script named script.py that calls scan_static, and for a .blend file with the range_scanner add-on enabled, run the script from the command line like this: blender -b <blend file> -P script.py

You should see a traceback like this:

Traceback (most recent call last):
  File <script.py>, line 5, in <module>
    range_scanner.ui.user_interface.scan_static(
  File ".../range_scanner/ui/user_interface.py", line 1621, in scan_static
    performScan(context, dependencies_installed, properties)
  File ".../range_scanner/ui/user_interface.py", line 1387, in performScan
    modifyAndScan(context, dependencies_installed, properties, None)
  File ".../range_scanner/ui/user_interface.py", line 1284, in modifyAndScan
    generic.startScan(context, dependencies_installed, properties, objectName)
  File ".../range_scanner/scanners/generic.py", line 305, in startScan
    mode = bpy.context.area.type
AttributeError: 'NoneType' object has no attribute 'type'

Trying to avoid that problem by using a context override yields the following traceback:

Traceback (most recent call last):
  File <script.py>, line 29, in <module>
    range_scanner.ui.user_interface.scan_static(
  File ".../range_scanner/ui/user_interface.py", line 1564, in scan_static
    scene = context.scene
AttributeError: 'dict' object has no attribute 'scene'

The problem here is that the code expects a bpy_types.Context instance, but an overridden context is a dictionary.

Expected behavior

The add-ons in Blender's core all support overriding the context.

Desktop (please complete the following information):

  • OS: Ubuntu 20.04
  • Blender version: 2.93.6

Parallel Scanning of LiDAR sensors

Hi all,

I am writing this to ask if someone has tried to perform a scan with multiple LiDAR sensors in Blender in parallel without issues?
I tried to do this using threading in Python, but it seems that this is not applicable with Blender's bpy library.
It starts scanning in parallel but then raises an error saying that the process failed because of a wrong context. I am also not sure if the Blainder add-on has an issue with parallel execution.
If someone has tried a better approach, it would be very helpful to share it.

Issue with custom mesh.

Hi!
I would like to know if it's possible to make a scan of every kind of scene.
I'm trying to use the "lego scene" Blender file from the original "nerf" dataset (file available here: https://drive.google.com/file/d/1yDB7Pmo2KSPw2d0J7E6FHvb-sU0DdTvX/view?usp=sharing), but it's not working.
I'm interested in the X, Y, Z coordinates and the intensity of every scan; can you help me?

My setup: standalone Blender 2.93 with the add-on (working well with the example scene).
I first did "make single user" on the mesh (I had an issue with it).
Then I got this error:
"closestHit.color = materialProperty.color
AttributeError: 'NoneType' object has no attribute 'color'"
Do you know if there is any hack to make the scan possible?

Thank you very much for your help.
PS: Thanks for sharing your code; it would be very helpful for me if I can make this scan work.

RenderSettings.resolution_percentage expected an int type

Great work with the plugin!

I was able to get things working in Blender 3.x with one small change to the code. It seems that RenderSettings.resolution_percentage used to allow a float but now requires an int. My workaround was to simply change lidar.py line 319 to the following:

scene.render.resolution_percentage = int(percentage)

Perhaps the UI could also be updated to reflect the type change.

I haven't run into any other issues besides this one so far. I would be willing to put together a pull request if that would be useful.

Add requirements.txt for manual pip installation of dependencies

The automatic search for dependencies did not work for me (tested on Win10 and Ubuntu 20.04). When trying to install the dependencies manually, I ran into an issue with laspy: by default, laspy 2.x will be installed, but this is not compatible with the las export in Blainder.
I would suggest adding a requirements file to declare the required versions: requirements.txt.
Note: I tested the las export with laspy 1.7.0, no other tests so far.

CSV exporter writes undesired blank spaces

Describe the bug
When I run a sonar scan on sonar_example.blend and output to CSV, the rows contain a bunch of blank spaces. This breaks trying to read the CSV with numpy.loadtxt. I worked around this by passing the converters argument to loadtxt.

...$ head -n 2 sonar_example_frame_300.csv 
c a t e g o r y I D ; p a r t I D ; X ; Y ; Z ; d i s t a n c e ; X _ n o i s e ; Y _ n o i s e ; Z _ n o i s e ; d i s t a n c e _ n o i s e ; i n t e n s i t y ; r e d ; g r e e n ; b l u e ;
0 ; 0 ; 1 . 5 7 0 ; 4 . 9 4 7 ; 0 . 0 6 9 ; 2 . 8 7 0 ; 1 . 5 7 0 ; 4 . 9 4 7 ; 0 . 0 6 9 ; 2 . 8 7 0 ; 0 . 6 4 7 ; 0 . 8 0 0 ; 0 . 5 0 8 ; 0 . 0 7 7

To Reproduce

Run a sonar scan on sonar_example.blend and output to CSV,

Expected behavior

no unnecessary spaces

Desktop (please complete the following information):

  • OS: Ubuntu 20.04
  • Blender version: 3.0.0

Additional context

I think the problem is with the "%.3" format notation in the CSV exporter's code.

Customize LiDAR's beam distribution angle

Hi, can I customize the angle of the LiDAR beams in the vertical direction, for example a vertical beam distribution like that of the RS-Ruby Lite? What should I do? Thanks for your help!

Installation of the add-on

I tried to install the add-on using the provided way, which is:
(Copy the range_scanner folder to C:\Program Files\Blender Foundation\Blender 2.83\2.83\scripts\addons_contrib (Windows).)

However, when I try to activate the add-on in Blender, an error message appears (see the attached screenshot).

How can it be solved?

Regards.

Trouble reproducing examples: RGB Values in tof (Kinect v2) scan

Hi,
I am trying to reproduce the examples from the repo. In particular, I am interested in getting RGB values from Blender materials / textures into the point clouds. I tried the following, based on example_scenes/part_segmentation_and_image_rendering.blend:

  • Change the material of one chair to an image texture (see the attached screenshot)
  • perform scan with tof Kinect v2 scanner
  • expected result: RGB values from image texture in pointcloud
  • actual result: the point cloud contains the same RGB values as in the example, not the texture colors
  • Even if I use an RGB color for the material, I get the RGB values as in the example.

From a glimpse into the source code and the examples I figured that RGB scanning is actually possible. Can you provide the correct material / texture configuration and properties to receive correct results?
The .blend file and the used texture image can be downloaded from https://seafile.rlp.net/d/f2a5db7f0fa043daafc0/ for reference.
Some hints on this would be highly appreciated.
Fabian

Export sensor poses (location and rotation)

Thanks for the nice tool.
I create a curve for the camera animation and create many frames (Blender autokeying) along the curve for a smooth sensor movement (animation) and skip every 2 to 5 frames.
Is it possible to export the sensor trajectory including the sensor poses (locations, rotations)? How can we do that?

ViewLayer does not contain object

Hello there,

I wanted to prepare different layers for sensor simulation, so I placed Collection.001 on one layer and Collection.002 in a different layer, see attached picture (still active in the View Layer)

Now when I execute the point cloud generation for one scene, it does not work, and I get an error that the other cube is not in the scene (see the attached screenshot).

Collection.001 and Collection.002 contain exactly the same objects.
Is this a bug or meant to be? If it is meant to be, that's alright; I just wanted to check in.

Scan Parameters

Hello, I was not able to post this as a pull request...
Great work so far!
I would like to have an uneven vertical FOV, for example from -15° to 25°.
I would also like to be able to save my own presets as a scanner.

Many thanks!

Ocean Modifier being Disabled

I'm not sure if this is a bug or a requirement for the range scanner to function. I'm attempting to simulate a sonar scan that covers up to where the ocean surface meets the structure (splash zone). For this I'd like to have a dynamic sea surface, so I've created a plane with the Ocean modifier, but every time I "generate point clouds" the Ocean modifier seems to get removed, and the scan only reflects a single frame of the distorted plane rather than a surface that changes frame by frame. Otherwise fantastic software!

Is this modifier being removed by design?

Thanks!


render error

I just opened scene/sonar_example.blend.
After clicking Render Animation, the log shows:
Render error (No such file or directory) cannot save: 'D:\Projekte\Masterarbeit\range_scanner\image_1.png0001.png' do nothing
I am sure that I set the export directory to the local directory.
Is there a necessary step that has not been done?
