meshoptimizer's Issues

simplifySloppy doesn't have a way to specify the target error

Hi,

I'm playing a bit with your great library in order to decimate some over-detailed 3D models before using the result as the basis for a collision mesh.

The input I have is therefore basically a list of vertex positions and a list of indices (no other vertex attributes).

I'm first running meshopt_generateShadowIndexBuffer() so that all vertices with a similar position are collapsed.

Basically, my issue is that I haven't managed to get a result that's simplified enough using meshopt_simplify(). The initial mesh has 345k triangles and the output has 285k triangles, no matter what parameters I've tried.

I've set the target index count to something very low (50) and even increased the error to its maximum value (1.0f), but that doesn't make a big difference.

Using meshopt_simplifySloppy(), the results are much better (it basically reaches the target index count). However, I want to use an error threshold rather than a target index count: I don't want the artists/users to have to enter a target index count, and the input meshes vary a lot (some are too detailed, some are not), so an error threshold makes much more sense to me.

So I'm stuck with meshopt_simplify(), I think, unless you have an idea of how to use meshopt_simplifySloppy() with an error threshold as input?
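For reference, a minimal sketch of the pipeline described above (the tightly packed float3 position layout is an assumption, the simplify call uses the signature without the later options/result_error parameters, and error handling is omitted):

#include "meshoptimizer.h"
#include <vector>

// Collapse positionally identical vertices, then run topology-preserving simplification.
std::vector<unsigned int> simplifyPositionsOnly(const std::vector<unsigned int>& indices,
                                                const std::vector<float>& positions, // x,y,z per vertex
                                                size_t target_index_count,
                                                float target_error)
{
    size_t vertex_count = positions.size() / 3;

    // Weld vertices that share the same position so simplification can cross attribute seams.
    std::vector<unsigned int> shadow(indices.size());
    meshopt_generateShadowIndexBuffer(shadow.data(), indices.data(), indices.size(),
                                      positions.data(), vertex_count,
                                      sizeof(float) * 3, sizeof(float) * 3);

    // Topology-preserving simplification; may stop well short of the target on messy meshes.
    std::vector<unsigned int> result(indices.size());
    size_t result_count = meshopt_simplify(result.data(), shadow.data(), shadow.size(),
                                           positions.data(), vertex_count, sizeof(float) * 3,
                                           target_index_count, target_error);
    result.resize(result_count);
    return result;
}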

meshopt_simplify: [screenshot]

meshopt_simplifySloppy: [screenshot]

What puzzles me a bit is that meshopt_simplify() doesn't modify the topology, despite the error threshold being set to 1.0f and the target index count set to 10. You can clearly see this when looking at the windows of the building in the foreground: despite the significant threshold, the windows are not simplified.

Is that normal? Can I improve it? Or is there a way to use meshopt_simplifySloppy() with an error threshold as input, rather than a target index count?

Thanks a lot for your help!

optimizeVertexCache results

Thanks for this library (off-topic: also thanks for the Roblox graphics API stats).

I am replacing Forsyth's optimizeFaces with optimizeVertexCache.

I have tested on models from https://casual-effects.com/data/.
On most of the models the ACMR is the same or better with optimizeVertexCache.

But for some models, like Sports Car or Road Bike, the ACMR is slightly larger with optimizeVertexCache; the difference is 0.01-0.04.

I just wanted to ask whether this is a known difference. I used meshopt_analyzeVertexCache for the test.
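For reference, a minimal sketch of the measurement I'm doing (the cache parameters passed to meshopt_analyzeVertexCache are assumptions; I compare the acmr field before and after optimization):

#include "meshoptimizer.h"
#include <cstdio>
#include <vector>

// Measure ACMR before/after vertex cache optimization for one mesh.
void compareAcmr(const std::vector<unsigned int>& indices, size_t vertex_count)
{
    // Assumed FIFO model parameters: 16-entry cache, no warp/primgroup limits.
    const unsigned int kCacheSize = 16;

    meshopt_VertexCacheStatistics before =
        meshopt_analyzeVertexCache(indices.data(), indices.size(), vertex_count, kCacheSize, 0, 0);

    std::vector<unsigned int> optimized(indices.size());
    meshopt_optimizeVertexCache(optimized.data(), indices.data(), indices.size(), vertex_count);

    meshopt_VertexCacheStatistics after =
        meshopt_analyzeVertexCache(optimized.data(), optimized.size(), vertex_count, kCacheSize, 0, 0);

    printf("ACMR: %.3f -> %.3f\n", before.acmr, after.acmr);
}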

Preserve extras for nodes

Hi, this is a great tool, but I'm facing a problem:

The input glTF file has some extras on its nodes:

"nodes": [
    {
      "name": "parent",
      "extras": {
        "xxx": 555
      },
      "children": [1]
    },
    {
      "name": "mesh",
      "mesh": 0,
      "extras": {
        "yyy": 666
      }
    }
  ]

It would be nice to preserve extras not only for materials, but for nodes too:

gltfpack-0.14-windows> .\gltfpack.exe -i test.gltf -o test_out.gltf -v -kn -ke -noq

"nodes": [
    { "mesh": 0 },
    { "name": "parent", "children": [2] },
    { "name": "mesh", "children": [0] }
  ]

simplifySloppy aggressively welds vertices causing issues for attribute discontinuities

When I use the aggressive option in gltfpack, I find that the model shading looks off. This appears to be caused by the normals being a bit messy. Attached is a screenshot from the babylon.js viewer with normal preview turned on for one of the meshes (hence the beautiful rainbow).

[screenshot]

I'm not sure if this is an unavoidable artifact, or something that can be improved. Also, I realize the input model is already pretty sparse. Nonetheless, the vertex removals seem reasonable, if only the normals were better preserved.

Here's an example of the command I'm using. I've also attached the input and output GLBs I'm using.

gltfpack -i /tmp/input.glb -o /tmp/output.glb -si 0.8 -sa -v

Archive.zip

simplifySloppy overestimating actual triangle count

Hi, I'm working on a project that automatically simplifies arbitrary meshes to various triangle counts. I've been using simplifySloppy because it seems more reliable for poor-quality input meshes and can reach very low triangle counts for low LODs.

I found that when reducing a highly detailed mesh to very low triangle counts, the grid size being used sometimes ended up being bigger than it needed to be - this is because countTriangles() wasn't taking into account that some triangles become duplicates and then get eliminated during filterTriangles(), after quantization.

I've made a fix for it here: virtalis@62ab34f. Would you like me to open a pull request, or was it left this way intentionally for speed? It now requires creating a hash table of triangles during each loop iteration at the start of the algorithm, which will obviously slow it down a bit compared to the simple sum it was doing before.
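For illustration, this is the general idea (a sketch only, not the actual patch; the per-vertex cell ids are assumed to come from the same grid quantization the algorithm already performs):

#include <array>
#include <set>
#include <algorithm>
#include <cstddef>

// Count triangles that survive grid quantization: drop triangles that become
// degenerate (two corners land in the same cell) and count duplicates only once.
size_t countUniqueTriangles(const unsigned int* indices, size_t index_count,
                            const unsigned int* vertex_cell_ids /* one grid cell id per vertex */)
{
    std::set<std::array<unsigned int, 3>> seen;

    for (size_t i = 0; i < index_count; i += 3)
    {
        std::array<unsigned int, 3> tri = {
            vertex_cell_ids[indices[i + 0]],
            vertex_cell_ids[indices[i + 1]],
            vertex_cell_ids[indices[i + 2]],
        };

        if (tri[0] == tri[1] || tri[1] == tri[2] || tri[0] == tri[2])
            continue; // degenerate after quantization

        std::sort(tri.begin(), tri.end()); // order-independent key for duplicate detection
        seen.insert(tri);
    }

    return seen.size();
}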

--

I've also made some changes to prevent vertices being collapsed when their normals are drastically different, by expanding the grid into an additional 3 dimensions for normal space: https://github.com/virtalis/meshoptimizer/commits/simplifySloppy-normals. I'm not sure if these are changes you'd be interested in, but I can open another issue to discuss them if you are.

Consider adding a --version flag

Before commenting on #88 I wanted to make sure I had the latest version of gltfpack, and I wasn't able to tell from inspecting the executable or its CLI output. A gltfpack --version flag would be helpful for that purpose. Thanks!

Using templated parameters for indices and vertex positions

If templated parameters were used for indices (common types include uint8_t, uint16_t, uint32_t, unsigned short, and unsigned int) and vertex positions (float, double), the library would be more flexible with regard to users' potential needs.
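For illustration, this is the kind of thin wrapper one can write today to bridge other index types to the C API (a sketch with assumed names, converting through a temporary unsigned int buffer):

#include "meshoptimizer.h"
#include <cstddef>
#include <vector>

// Hypothetical adapter: accept any integral index type, convert to unsigned int,
// run meshopt_simplify, and convert the result back to the caller's index type.
template <typename Index>
std::vector<Index> simplifyIndices(const std::vector<Index>& indices,
                                   const float* positions, size_t vertex_count, size_t stride,
                                   size_t target_index_count, float target_error)
{
    std::vector<unsigned int> in(indices.begin(), indices.end());
    std::vector<unsigned int> out(in.size());

    size_t count = meshopt_simplify(out.data(), in.data(), in.size(),
                                    positions, vertex_count, stride,
                                    target_index_count, target_error);

    return std::vector<Index>(out.begin(), out.begin() + count);
}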

gltfpack does not preserve node hierarchy

Is there some option to keep the "body" nodes shown in the image below?

[screenshot]

Running gltfpack without parameters results in a flat node list:

[screenshot]

This makes it impossible to move the bodies.

[gltfpacker] Feature request: add support for unlit materials (KHR_materials_unlit extension)

Hi,

In order to test the models from glTF-Sample-Models more easily, I've made a small set of scripts to download the latest models and run them through gltfpacker: https://github.com/TimvanScherpenzeel/gltfpacker-sample-model-test. It is very much hacked together but it does the job; perhaps parts of it can be useful to you too.

I haven't spotted any bugs or unhandled cases except for a model with unlit materials (using the KHR_materials_unlit extension). I think the objective of the extension is in line with the goals of gltfpacker.

Original: [screenshot]

gltfpacker: [screenshot]

UnlitTest.zip

Kind regards,

Tim

Simplification runs, but the result has many defects/issues

I am using meshoptimizer's simplification methods to simplify 3D scans that have been triangulated using marching cubes. The meshes are far denser than they need to be, so I am attempting to simplify them down to 20% of their original density.

Unfortunately the result is unusable, full of holes and other defects. Is this normal?

Here is an example:

Input Mesh: [screenshot]

Simplified Result: [screenshot]

I am using MeshLab to visualize the result. The meshes are exported to STL format.

The mesh format I am using internally in my software is that of the VCG library, but I think I am converting the data appropriately so that meshoptimizer's simplification methods can operate correctly.

Here is how I am converting the data:

// Build a flat index buffer from VCG faces by computing each face corner's
// offset into the vertex array (pointer subtraction from the array start).
const MeshVert* __restrict root = input.vert.data();
std::vector<unsigned> indices;
std::vector<vcg::Point3f> vertices;

// Copy positions into a tightly packed array of Point3f (3 floats each).
for (const auto& v : input.vert)
    vertices.emplace_back(v.P());

// Three indices per face, one per corner.
for (const auto& f : input.face) {
    indices.emplace_back(unsigned((MeshVert*)(f.cV(0)) - (MeshVert*)root));
    indices.emplace_back(unsigned((MeshVert*)(f.cV(1)) - (MeshVert*)root));
    indices.emplace_back(unsigned((MeshVert*)(f.cV(2)) - (MeshVert*)root));
}

const size_t index_count = indices.size();
std::vector<unsigned> simplified(index_count);
size_t target_count = simplification_factor * index_count; // e.g. 0.2 for 20% density
float target_error = 1e-3f;

if (restricted)
    simplified.resize(meshopt_simplify(&simplified[0], indices.data(), index_count, (const float* __restrict)&vertices[0], vertices.size(), sizeof(vcg::Point3f), target_count, target_error));
else
    simplified.resize(meshopt_simplifySloppy(&simplified[0], indices.data(), index_count, (const float* __restrict)&vertices[0], vertices.size(), sizeof(vcg::Point3f), target_count));

I'm a little confused by the concept of an "index" in this mesh processing library. Are the indices here triplets of offsets from the start of the vertex array, one triplet per face/triangle?

The VCG mesh has a vector of vertices, each represented by a Point3f type (the internal data of which is 3 floats - X, Y, and Z). The faces contain 3 vertex pointers. I compute the offset/index from these pointers by subtracting the start of the vertex array (root).

Is this the correct approach? Am I doing something wrong? The result looks reasonable, but not entirely correct. Perhaps this is just the nature of decimation?

how to set BASISU_PATH?

Error: basisu is not present in PATH or BASISU_PATH is not set

This is the folder structure: [screenshot]

This is the env file: [screenshot]

What am I missing? It keeps telling me to set PATH.

Object space mesh error from meshopt_simplify()?

I currently compute an approximate error metric, roughly based on the Hausdorff distance between the two meshes, after generating the simplified one. Is there any way to avoid this calculation and instead use a consistent object-space error metric computed internally by meshopt_simplify()?

Thanks in advance.
BTW, very handy library; much appreciated.

Any plans to integrate with Blender and/or FreeCAD?

Looks like there is potential for integration with some widespread software that offers mesh optimization functionality, but without all the cool tricks.

Have you considered contacting them about this?

Add -kn flag to readme?

This perfectly solved the issue of referencing buildings by name in a scene, but I was only able to find it by searching closed issues. Perhaps it's worth adding to the README for future users?

Some metrics

Hi Arseny,

first of all, awesome project, really cool.

Second, sorry for opening an issue for this; it's more of a request for info about performance metrics.

I'm developing a JVM port of Assimp here, and I need to implement some state-of-the-art post-processing techniques to reduce the time required to render meshes.

Before ending up on your repo, I was actually looking for the best method to improve vertex caching, and I guess it's this one: "An Improved Vertex Caching Scheme for 3D Mesh Rendering", because it's even better than AMD Tootle by about 10% (although it requires ~100x the time).

Anyway, as I already said, I'd be interested to know whether you have ever taken any metrics of your mesh optimizer, or whether you have any thoughts/feedback on implementing just a standalone vertex cache optimization versus a full-pipeline optimization like your project.

Thanks in advance

Is uneven decimation normal on some models?

Hi, I integrated your optimizer for generating collision meshes, using meshopt_simplify followed by meshopt_optimizeVertexFetch. Unfortunately, on some meshes, the decimated result has some parts aggressively simplified while other parts keep redundant triangles; the vertices have no attributes other than their position.
Does it come from the topology of the mesh? My settings?
The most noticeable case is an airport terminal, as you can see here; some parts are heavily reduced while others seem to be ignored.
The target errors were 0.001 and then 0.1, and the vertices preserved were 64.5% and 63% respectively.
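For reference, a minimal sketch of the pipeline I'm using (the tightly packed float3 position layout and the parameter values are assumptions; error handling omitted):

#include "meshoptimizer.h"
#include <vector>

// Simplify, then reorder vertices for fetch locality; positions only (3 floats per vertex).
void buildCollisionMesh(std::vector<unsigned int>& indices, std::vector<float>& positions,
                        size_t target_index_count, float target_error)
{
    size_t vertex_count = positions.size() / 3;

    std::vector<unsigned int> simplified(indices.size());
    simplified.resize(meshopt_simplify(simplified.data(), indices.data(), indices.size(),
                                       positions.data(), vertex_count, sizeof(float) * 3,
                                       target_index_count, target_error));

    // Reorder the vertex buffer to match the new index order and drop unused vertices.
    std::vector<float> reordered(positions.size());
    size_t unique = meshopt_optimizeVertexFetch(reordered.data(), simplified.data(), simplified.size(),
                                                positions.data(), vertex_count, sizeof(float) * 3);
    reordered.resize(unique * 3);

    indices.swap(simplified);
    positions.swap(reordered);
}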

Models as used (only positions as vertex attributes), exported as .OBJ:
Original / Decimated (attached)

gltfpack: Error running through large gltf file

Running gltfpack from npm:
gltfpack -i sourcefile.gltf -o output\destination.gltf

Error:

exception thrown: RuntimeError: memory access out of bounds,RuntimeError: memory access out of bounds
    at wasm-function[30]:0xaed
    at wasm-function[52]:0x1acd
    at wasm-function[845]:0x238d1
    at wasm-function[847]:0x23d08
    at wasm-function[850]:0x24121
    at wasm-function[410]:0xd47a
    at wasm-function[786]:0x20e32
    at Module._main (\Roaming\npm\node_modules\gltfpack\bin\gltfpack.js:2:84300)
    at callMain (\Roaming\npm\node_modules\gltfpack\bin\gltfpack.js:2:85315)
    at doRun (\Roaming\npm\node_modules\gltfpack\bin\gltfpack.js:2:85893)

Input model is a large gltf file (440 MB gltf + 502MB bin).

Using overdraw optimization by itself

It would be nice to be able to use the overdraw optimizer by itself, both for testing purposes and because I am using unusual geometry with many concavities, which means that pixel processing far outweighs the cost of vertex transformation.
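For context, at the library level the direct call looks roughly like this (a sketch; the 1.05 threshold is an assumed value, and the header documents that the input indices are expected to already be vertex-cache-optimized):

#include "meshoptimizer.h"
#include <vector>

// Reorder triangles to reduce overdraw. threshold = 1.05 allows the vertex cache
// efficiency to degrade by up to 5% in exchange for better front-to-back ordering.
std::vector<unsigned int> optimizeOverdrawOnly(const std::vector<unsigned int>& indices,
                                               const std::vector<float>& positions /* x,y,z per vertex */)
{
    size_t vertex_count = positions.size() / 3;

    // Note: the header says the indices should be the output of meshopt_optimizeVertexCache;
    // running it "by itself" on raw indices is exactly what this issue is asking about.
    std::vector<unsigned int> result(indices.size());
    meshopt_optimizeOverdraw(result.data(), indices.data(), indices.size(),
                             positions.data(), vertex_count, sizeof(float) * 3,
                             1.05f);
    return result;
}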

gltfpack: Validation errors (MESH_PRIMITIVE_ATTRIBUTES_ACCESSOR_INVALID_FORMAT)

Hi

The optimized files created via gltfpack.exe seem to contain some incorrect attributes and cannot be opened by some glTF importers (e.g. the built-in Windows 10 3D Viewer doesn't open the files).

The glTF viewer at https://gltf-viewer.donmccurdy.com/ is able to open the files but reports the following errors:

MESH_PRIMITIVE_ATTRIBUTES_ACCESSOR_INVALID_FORMAT | Invalid accessor format '{VEC3, BYTE normalized}' for this attribute semantic. Must be one of ('{VEC3, FLOAT}'). | /meshes/0/primitives/0/attributes/NORMAL
MESH_PRIMITIVE_ATTRIBUTES_ACCESSOR_INVALID_FORMAT | Invalid accessor format '{VEC3, UNSIGNED_SHORT}' for this attribute semantic. Must be one of ('{VEC3, FLOAT}'). | /meshes/0/primitives/0/attributes/POSITION
MESH_PRIMITIVE_ATTRIBUTES_ACCESSOR_INVALID_FORMAT | Invalid accessor format '{VEC2, UNSIGNED_SHORT}' for this attribute semantic. Must be one of ('{VEC2, FLOAT}', '{VEC2, UNSIGNED_BYTE normalized}', '{VEC2, UNSIGNED_SHORT normalized}'). | /meshes/0/primitives/0/attributes/TEXCOORD_0
ACCESSOR_NON_UNIT | 2368 accessor elements not of unit length: 0. [AGGREGATED] | /accessors/0

Segfault gltfpack linux 64bit fedora 30

master cc463d7

./gltfpack -v -i /home/arpu/Work/projects/vrland_assetssrc/models/bentley/bentley.glb -o bent.glb
input: 174 nodes, 170 meshes, 0 skins, 0 animations
meshes: 1332023 triangles, 1086730 vertices
[1]    3449 segmentation fault (core dumped)  ./gltfpack -v -i  -o bent.glb

I think I need a debug build and some time with gdb.

gltfpack: option to disable mesh consolidation?

I want to get the "goodness" of gltfpack while still allowing a web application to access certain individual meshes that I've grouped for that purpose.

Perhaps I'm doing something that should be done through separate glTF files; in other words, maybe it doesn't make sense to have this option since mesh consolidation is a core feature of gltfpack?

I can provide a code example if helpful, but I wanted to at least start the thread to see if this makes sense as an issue.

I'm using this component for A-Frame to access individual parts of the gltf:
https://github.com/supermedium/superframe/tree/master/components/gltf-part/

Versions:
A-Frame Version: 1.0.4 (Date 2020-05-07, Commit #9022b97e)
THREE Version (https://github.com/supermedium/three.js): ^0.115.1
Installed gltfpack with npm install -g gltfpack; I think it's using commit hash 9e89bf3.

gltfpack: Significant UV distortion for models with high tiling factors

I think I'm seeing a similar issue in v0.13 of gltfpack. Testing against the model here: https://sketchfab.com/3d-models/melodia-city-hotel-a2fb8e4065ce470296d6d801daa37f18, with only default gltfpack options used:

Before/after comparison: [screenshots]

Annotated diff: [screenshot]

Much of that diff is in reflections, which probably relates to merging meshes with transparency, and that seems fine. But the highlighted areas on the side of the building and the palm trees in front of the building show noticeable UV shifts that might suggest an issue:

[screenshot]

Consider MESHOPT_-prefixed extensions

As the KHR_quantized_geometry and KHR_meshopt_compression extensions are pseudo-extensions at this point, consider reserving the MESHOPT_ prefix (or similar) in the glTF repository and using that for now. Details are on GitHub, but as a short summary:

  • Vendor extensions like this are recommended for single-party needs and initial proposals.
  • EXT_ extensions are recommended for multi-party extensions that are not, for whatever reason, ratified by Khronos or protected by the Khronos IP framework.
  • KHR_ extensions are ratified by Khronos, and protected by its IP framework.

I like the idea of exploring official versions of both, but that would take some time, and it's best to prevent any confusion in the meantime. Thanks! 🙂

gltfpack: preserve material extras

Currently, gltfpack discards any "extras" attached to materials when it merges materials. It would be nice if we could optionally preserve the extras when merging, and only have the areMaterialsEqual test pass when the extras are also equal.

gltfpack: model origin shifted after optimization

Hey, first of all, thank you so much for your work on this tool. The filesize results are really looking great right now.

My use case is as follows: I'm trying to compress models I'll be using in a real-time three.js scene. I use Blender to correct my models and set the glTF file up, and the CLI gltfpack to optimize my file size.

When I use the .gltf file straight out of Blender in my three.js scene, the origin I set in Blender is respected. However, after running it through gltfpack, the origin is shifted and looks like it's somewhere in the upper-left corner relative to the model. This happens with no options, with -c, and with -cc; basically as soon as it goes through gltfpack.

I'm sorry I don't know more, but it might be a bug in how origins are handled? This also happens with every model I try. If this is user error, I'd be more than happy to learn from my mistakes!

You can find the original glTF file with the correct origin, plus the one converted by gltfpack, in this zip: https://we.tl/t-4H8zlF8exe

Thank you!

Struggle building with CMake

Hi,
I would like to use gltfpack to optimize animated meshes for a web game. The issue is that I'm a web dev who has never used CMake, and I don't quite understand what I'm supposed to do with it to build gltfpack. Your readme states:

On Windows (and other platforms), you can use CMake:

cmake . -DCMAKE_BUILD_TYPE=Release -DBUILD_TOOLS=ON
cmake --build . --config Release --target gltfpack

I tried to install CMake but got no CLI, only a GUI, and I have no idea where I'm supposed to run these commands.
Could you point me to some online resources explaining how to use CMake in this context, please?

[gltfpack] Issue with the displacement of position in animated skinned meshes

Hi Arseny,

Thank you for all your hard work on gltfpack!

I've been testing various glTF models from the glTF-Sample-Models repo and came across a few that have issues after packing with gltfpack with default settings.

  • BrainStem displaces the position of the mesh from the original position. Animations themselves appear to work correctly.

Correct: [screenshot]

Incorrect: [screenshot]

  • CesiumMan has the same issue, displaces the position of the mesh from the original position. Animations themselves appear to work correctly.

Correct: [screenshot]

Incorrect: [screenshot]

  • AnimatedCube does not spin anymore after conversion. There is an animation track but it does not appear to play correctly.

Incorrect: [screenshot]

  • BoxAnimated does not animate anymore after conversion. There is an animation track but it does not appear to play correctly.

  • TriangleWithoutIndices does not render correctly, the viewer throws the following error: Cannot read property 'updateMatrixWorld' of undefined.

I've created a fork of three-gltf-viewer that uses the latest versions of lib/GLTFLoader.js and js/meshopt_decoder.js from this repo directly (via raw.githack.com) to make testing the implementation easier; perhaps it can also be of use to you during development.

Kind regards,

Tim

Different vertex count between tiny obj loader and fast obj loader

Good afternoon, I hope you are having a nice Christmas break.

I have been using fast_obj for my Vulkan code and it has worked quite well. Now I need to merge it back into my main engine, where I have a mesh compiler that uses tiny_obj_loader. When I ported that code to fast_obj I started noticing this issue: when loading the file, I get one extra vertex from fast_obj.
Tiny obj: 3792
Fast obj: 3793

Maya reports 3792: [screenshot]

This is an issue on my end because I am exporting extra data from Maya, like tangents and skinning data, which I fetch using the same OBJ indexing. With the extra vertex in the array I get an out-of-bounds assertion.

Below (and attached) is a sample program to reproduce:

// Define before any includes so it affects the CRT headers.
#ifndef _CRT_SECURE_NO_WARNINGS
#define _CRT_SECURE_NO_WARNINGS
#endif

#define TINYOBJLOADER_IMPLEMENTATION
#include "tiny_obj_loader.h"

#define FAST_OBJ_IMPLEMENTATION
#include "fast_obj.h"

#include <cstdio>
#include <vector>

const char *PATH = "test.obj";

int main() {
  // Load the OBJ using tiny_obj_loader.
  tinyobj::attrib_t attr;
  std::vector<tinyobj::shape_t> shapes;
  std::vector<tinyobj::material_t> materials;

  std::string warn;
  std::string err;
  bool ret = tinyobj::LoadObj(&attr, &shapes, &materials, &warn, &err, PATH);
  if (!ret) {
    printf("Error loading %s: file not found\n", PATH);
    return 1;
  }

  // Load the same file using fast_obj.
  fastObjMesh *obj = fast_obj_read(PATH);
  if (!obj) {
    printf("Error loading %s: file not found\n", PATH);
    return 1;
  }

  // Compare the vertex (position) counts reported by the two loaders.
  printf("Tiny vertex count %i\n", static_cast<int>(attr.vertices.size() / 3));
  printf("Fast vertex count %u\n", obj->position_count);
}

Here is the output: [screenshot]

Am I doing anything wrong? I have thought about what the issue could be and haven't found a solution yet. I have also tried pre-triangulating the mesh before export and got the same result.
If anything else is needed on my side, please let me know.

The main reasons I am switching to fast_obj are:

  1. Hopefully faster debug and release performance than tiny_obj_loader.
  2. Since I am about to use meshoptimizer anyway, I can just use fast_obj and have one dependency instead of two.

Best regards

M.

solution/code/model link: https://1drv.ms/u/s!Ai0n7iKmKMz0gthD0wT4C3KUKbW5Pw?e=eKo2ZN

Questions: Material Index and UVs

Hi,

In my model, there can be two adjacent, co-planar triangles with different material IDs. They should not be merged. How do I handle this case?

Moreover, I have UVs defined per triangle vertex (per corner). For a given cube model, I have:
Vertices: 24
Vertex Normals: 24
Indices: 36
UVs: 36 (every triangle has 3 UVs, one for each vertex in the triangle)

Now I cannot find a way to set UVs on the simplified mesh from the original mesh.
Any clue?

Thanks in advance.

Add simplify option to gltfpack

Thank you for this super useful library! I would like to further optimize models by running simplification. It would be really helpful if gltfpack allowed me to do this in one shot.

For example, -si 0.4 could reduce the poly count to 40% of the original.

./gltfpack -i glTF-Sample-Models/2.0/BarramundiFish/glTF/BarramundiFish.gltf -o ./glTF/BarramundiFish-simple.gltf -si 0.4

Encode/decode vertex deinterleaved

Hi @zeux,

Maybe I'm getting it wrong, but if I want to encode/decode deinterleaved vertex arrays, it seems that I'm always forced to use a multiple of 4 bytes per vertex attribute:

https://github.com/zeux/meshoptimizer/blob/master/src/vertexcodec.cpp#L1070

For instance, it means that I would need to use floats instead of shorts for quantized positions. That's probably not an issue when encoding, as general-purpose compressors will deflate the unused space in each float anyway.
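For example, to feed a deinterleaved stream of 16-bit quantized positions (6 bytes per vertex) through the codec, I'd have to pad each element to 8 bytes, roughly like this (a sketch of what I mean, not a recommendation):

#include "meshoptimizer.h"
#include <cstdint>
#include <vector>

// Encode a deinterleaved stream of 16-bit quantized positions (3 x uint16 = 6 bytes)
// by padding each element to 8 bytes, since the codec requires vertex_size % 4 == 0.
std::vector<unsigned char> encodeQuantizedPositions(const std::vector<uint16_t>& positions /* x,y,z per vertex */)
{
    size_t vertex_count = positions.size() / 3;

    struct PaddedPos { uint16_t x, y, z, pad; }; // 8 bytes per vertex
    std::vector<PaddedPos> padded(vertex_count);
    for (size_t i = 0; i < vertex_count; ++i)
        padded[i] = { positions[i * 3 + 0], positions[i * 3 + 1], positions[i * 3 + 2], 0 };

    std::vector<unsigned char> buffer(meshopt_encodeVertexBufferBound(vertex_count, sizeof(PaddedPos)));
    buffer.resize(meshopt_encodeVertexBuffer(buffer.data(), buffer.size(),
                                             padded.data(), vertex_count, sizeof(PaddedPos)));
    return buffer;
}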

The problem for me comes when decoding: I usually upload quantized attributes to the graphics card and dequantize them in the shader (normals, positions, etc.). If I use the meshoptimizer decoder, will I be spending extra memory on the GPU if I upload the buffer directly from decode? I guess yes. In that case I would need to decide whether it's better to spend the extra time copying arrays or the extra memory on the GPU.

Is what I said correct? Any advice on the matter?

Thanks!

missing texture reference when using gltf pack

Hi, for models that have textures, after I run gltfpack and attempt to load the model I'm getting missing texture reference errors.
E.g.: gltfpack -i Sponza.gltf -o scenepacked.glb

from this dataset:
https://github.com/KhronosGroup/glTF-Sample-Models/tree/master/2.0/Sponza/glTF

If I then try to view my packed .glb in something like the glTF viewer (https://gltf-viewer.donmccurdy.com/), I get this:

IO_ERROR Node Exception: TypeError: Failed to fetch /images/0/uri
IO_ERROR Node Exception: TypeError: Failed to fetch /images/1/uri
IO_ERROR Node Exception: TypeError: Failed to fetch /images/2/uri
...etc

I've also tried the Babylon.js loader, and I'm getting missing texture errors there as well.
If I try to pack a glTF that does not contain textures to begin with, everything works.

Please advise.

Unprefixed CMake options polluting global namespace

MeshOptimizer uses generic unprefixed names for its CMake options. For example:

option(BUILD_DEMO "Build demo" OFF)
option(BUILD_TOOLS "Build tools" OFF)
option(BUILD_SHARED_LIBS "Build shared libraries" OFF)

There's no problem if MeshOptimizer is compiled as a standalone project.

However, when MeshOptimizer is compiled as part of a complex CMake build process, these option names pollute the global namespace.

Popular libraries use prefixes for their CMake options. For example, Assimp uses the ASSIMP_ prefix (ASSIMP_BUILD_ASSIMP_TOOLS, ASSIMP_BUILD_ASSIMP_TESTS, etc), and GLFW uses the GLFW_ prefix (GLFW_BUILD_EXAMPLES, GLFW_BUILD_DOCS, etc).

A similar approach should be adopted for MeshOptimizer.

Attribute-aware error metrics for simplification

Hi! I'm playing with the https://developer.nvidia.com/orca/amazon-lumberyard-bistro dataset and meshoptimizer, and I've noticed this particular failure case related to the way the corners were authored.

For example, here is the original chair mesh:
[screenshot]

Notice the rounded corners with shared vertices. They survive the first pass of meshopt_simplify, down to half the number of triangles, just fine:
[screenshot]

However, once the triangles between two sides facing at 90deg get folded and the sides start sharing the vertices, the vertex normals can no longer be correct:
[screenshot]

What would be a solution to (automatically) preventing this?

I was thinking of adding a custom skip in the pickEdgeCollapses loop when the angle between vertex normals is above a certain threshold, but I'm sure there's a better/simpler solution, perhaps already there? :)

(Instead of preventing the collapse, one could also allow it but duplicate the vertices so the normals aren't shared?)
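For illustration, this is the kind of check I had in mind (a standalone sketch with an assumed threshold, not tied to the actual pickEdgeCollapses code):

#include <cmath>

// Reject an edge collapse when the two endpoint normals diverge by more than maxAngleDeg.
// n0/n1 are assumed to be unit-length vertex normals.
bool normalsAllowCollapse(const float n0[3], const float n1[3], float maxAngleDeg = 60.0f)
{
    float dot = n0[0] * n1[0] + n0[1] * n1[1] + n0[2] * n1[2];
    float cosThreshold = std::cos(maxAngleDeg * 3.14159265f / 180.0f);
    return dot >= cosThreshold;
}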

Thanks for the great library!
