Although I only learned last week that GLTFLoader supports several new extensions, toward the end of the year I was less busy and had time to study them and run a proper evaluation. After the New Year holiday it was finally finished. This article covers:
- Purpose and usage
- Performance (compression, loading, parsing)
- Availability (scenarios, scope)
- Guesses at future improvement directions
When it comes to glTF compression, I think we all know Draco. While adapting GLTFLoader for the Mini Program environment, I found that there are several other compression extensions as well, such as KHR_mesh_quantization and EXT_meshopt_compression.
Quantization means using integer data to represent data that was stored as floating-point numbers, which makes it easier to compress and store, at the cost of some precision. We saw a similar approach with tfjs models before. glTF stores all mesh data as floating-point numbers: a single-precision float is 32 bits, i.e. 4 bytes, so a vertex position (3 numbers) takes 12 bytes, texture coordinates 8 bytes, the normal 12 bytes, and the tangent 16 bytes, for 48 bytes of attribute data per vertex. With the extension, positions are stored as SHORT (8 bytes) and texture coordinates take 4 bytes, while normals are stored as BYTE (4 bytes) and the tangent takes 4 bytes (each attribute padded so as not to break the standard's 4-byte alignment), for a total of 20 bytes. So a quantized mesh shrinks by roughly 58.4%.
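The byte accounting above can be sketched in a few lines of JS (the helper names are mine; the normalized encoding follows the glTF convention for normalized SHORT accessors):

```javascript
// Uncompressed: FLOAT (4 bytes) per component.
const floatBytes = {
  position: 3 * 4, // 12
  texcoord: 2 * 4, // 8
  normal:   3 * 4, // 12
  tangent:  4 * 4, // 16
};
const uncompressed = Object.values(floatBytes).reduce((a, b) => a + b, 0); // 48

// Quantized: SHORT positions/texcoords, BYTE normals/tangents,
// each attribute padded up to a multiple of 4 bytes.
const pad4 = (n) => Math.ceil(n / 4) * 4;
const quantBytes = {
  position: pad4(3 * 2), // 6 -> 8
  texcoord: pad4(2 * 2), // 4
  normal:   pad4(3 * 1), // 3 -> 4
  tangent:  pad4(4 * 1), // 4
};
const quantized = Object.values(quantBytes).reduce((a, b) => a + b, 0); // 20

const saved = 1 - quantized / uncompressed; // ~0.583, i.e. the 58.4% above

// Normalized SHORT encoding: a float in [-1, 1] stored as a signed 16-bit int.
const encodeShortNorm = (x) => Math.round(x * 32767);
const decodeShortNorm = (q) => Math.max(q / 32767, -1);
```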
The extension entered three.js in r122. Its compression pipeline is as follows; the fifth step is the quantization described above.
By contrast, because of wasm support problems, the size of the JS version, and the limit on the number of workers, Draco is not suitable for Mini Programs; its JS decoder is even bigger than three.js itself.
Both KHR_mesh_quantization and EXT_meshopt_compression can be produced with the same tool:

```
npm i -g gltfpack
```

The gltfpack command-line tool is a C project built as a WASM executable (we will see more wasm output from projects like this in the future; KHR's optimized version of the basis transcoder is written in AssemblyScript and compiled to wasm). For KHR_mesh_quantization:

```
gltfpack -i model.glb -o out.glb
```

For EXT_meshopt_compression, just add the -cc flag:

```
gltfpack -i model.glb -o out.glb -cc
```
An advantage of using gltfpack for quantization is that the quantization parameters are adjustable (this requires the meshopt extension). For example, if the normals need higher precision, adjust -vn; see gltfpack -h for the details.
KHR_draco_mesh_compression can be produced with gltf-pipeline:

```
npm i -g gltf-pipeline
gltf-pipeline -i model.glb -o out.glb -d
```
It can also adjust Draco's quantization parameters.
Performance (compression, loading, parsing)
Next we compare how the compressed meshes perform. The models come from glTF-Sample-Models. You can see that for ReciprocatingSaw.glb, which contains only vertex data, Draco does best; the reason is simple: Draco's default compression parameters are much more aggressive than MeshQuan's. But for BrainStem.glb, which contains animation, MeshOpt does best. WaterBottle.glb doesn't shrink much, because its vertex count is limited and textures account for most of its size.
Since the defaults differ and favor Draco, a fair comparison needs the same compression parameters everywhere. This time comparing just BrainStem and ReciprocatingSaw is enough, with gltfpack's parameters aligned to Draco's:

```
gltfpack -i model.glb -o out.glb -vp 11 -vt 10
gltfpack -i model.glb -o out.glb -vp 11 -vt 10 -cc
```

You can see that even with the parameters changed, the MeshQuan size does not change, because the extension has become part of the standard and its quantization parameters are fixed and cannot be modified; MeshOpt's size goes up a little, though.
When choosing a compression scheme, besides comparing compressed sizes, you also have to compare how easy the decoder is to load (on the web and in the Mini Program environment). And that is before counting the ten-plus KB of DracoLoader itself.
Note that the MeshOpt wasm decoder ships in two variants, a basic version and a SIMD version, but no official asm.js version is provided; you have to convert it yourself with binaryen/wasm2js, or use the meshopt_decoder.asm.module.js I have already converted.
At the decoder level, MeshOpt feels like a complete victory.
Parsing time comparison
The models used here are compressed with the default parameters; the power mode is high performance, with the CPU at its highest frequency throughout. Each figure is the average of five runs, all using the wasm versions (MeshOpt uses the SIMD build), on Chrome 88.
You can see that MeshQuan and MeshOpt reduce loading time a lot compared with the uncompressed model, while Draco adds a lot. If you take the decoder's loading time on first use into account, of the 3 extensions only Draco is affected, because only Draco needs to download its wasm over the network the first time a glTF is loaded; MeshOpt's decoder is serialized into a string, so it only needs to be unpacked, with no network request.
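The "serialized into a string" point can be illustrated with a minimal sketch: the decoder ships its wasm binary as a base64 string inside the JS source and instantiates it at runtime, so no fetch is needed. The embedded module below is a trivial empty wasm binary standing in for the real decoder:

```javascript
// A wasm binary embedded as a base64 string in the JS source; unpacking it
// needs no network request. "AGFzbQEAAAA=" is the minimal empty wasm module
// (magic "\0asm" + version 1), standing in for the real meshopt decoder.
const wasmBase64 = "AGFzbQEAAAA=";

// Unpack the string back into bytes (Buffer in Node, atob in browsers).
const bytes = typeof Buffer !== "undefined"
  ? new Uint8Array(Buffer.from(wasmBase64, "base64"))
  : Uint8Array.from(atob(wasmBase64), (c) => c.charCodeAt(0));

// The bytes are a complete wasm module, ready for synchronous compilation.
const valid = WebAssembly.validate(bytes);
const module = valid ? new WebAssembly.Module(bytes) : null;
```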
The parsing time comparison for the asm version is in the availability section.
Visual comparison before and after compression
In principle, besides the performance indicators above, we should also check whether anything is wrong with the model after compression. This is hard to quantify with a metric; in the end it depends on whether the designer accepts the result. To make visual comparison easier, I wrote a tool, compressed-model-diff, which offers 3 comparison modes plus wireframe comparison. It can be used online.
Availability (scenarios, scope)
Which scenarios an extension is usable in mainly depends on how hard its decoder is to load and how big it is. As shown above, Draco is rather awkward in Mini Programs, while MeshQuan needs no decoder at all, so its availability is the highest; MeshOpt only needs the asm version to be compatible with the iOS Mini Program environment. So if you need to support Mini Program platforms, Draco is not a good fit. Of course, you could load a differently compressed model on each platform, but then the model's appearance has to be fine-tuned per platform; serving the same model everywhere is more maintainable.
So here is a separate supplement on the decode performance of the MeshOpt asm version. There was a strange finding: the first parse takes a long time, but by about the 3rd run it is close to uncompressed performance, and by the 5th it approaches wasm performance. Why is that? Is it a potential optimization avenue? Let's look at Firefox, the first browser to support asm.js, and see whether a similar situation exists there. It turns out that in this case Firefox's JS implementation does much better than Chrome's, and wasm conversely brings no improvement, but a similar pattern appears. Does this mean the decoder needs warming up, i.e. the browser has to be told that the decode code deserves special optimization? So I tried loading a 1.21 KB Triangle-meshopt.glb 5 times, then reloading the test model and recording the data. It seems to have had no effect; presumably the warm-up count was not enough: Triangle-meshopt.glb has only 3 vertices, so executing it 5 times only covers 15 vertices, which did not trigger the optimization. I will dig into this further when I have time.
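For the record, here is a sketch of the kind of warm-up measurement I mean. The kernel (zigzag plus delta decoding) is only a stand-in in the spirit of the real decoder, not meshopt's actual code:

```javascript
// Decode-like hot loop: zigzag-encoded deltas back into absolute values.
function decodeDeltas(encoded) {
  const out = new Int32Array(encoded.length);
  let prev = 0;
  for (let i = 0; i < encoded.length; i++) {
    const zig = encoded[i];
    const delta = (zig >>> 1) ^ -(zig & 1); // zigzag -> signed delta
    prev += delta;
    out[i] = prev;
  }
  return out;
}

// Encode helper so the round trip can be checked.
function encodeDeltas(values) {
  const out = new Uint32Array(values.length);
  let prev = 0;
  for (let i = 0; i < values.length; i++) {
    const delta = values[i] - prev;
    out[i] = (delta << 1) ^ (delta >> 31); // signed delta -> zigzag
    prev = values[i];
  }
  return out;
}

const values = Int32Array.from({ length: 1 << 16 }, (_, i) => (i * 31) % 997 - 500);
const encoded = encodeDeltas(values);

// Run the same decode five times and record each run's time.
const timings = [];
for (let run = 0; run < 5; run++) {
  const t0 = Date.now();
  decodeDeltas(encoded);
  timings.push(Date.now() - t0);
}
// Expectation (not asserted, since timings vary by machine): the first run is
// the slowest while the JIT warms up, later runs settle near steady state.
```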
Under Firefox, the asm version's first-load time is 1.5 to 2.3 times the uncompressed time; under Chrome, it is 3.08 to 4.4 times. Of course, this only matters in the WeChat Mini Program on iOS, and the iPhone's CPU is good enough that it is still acceptable. If a warm-up method can be found that brings the 1st load down to the 5th load's time, so much the better.
Quantization is, after all, a lossy compression method that gives up precision, but as the KHR_mesh_quantization introduction says, it is a trade-off between precision and size:

> Vertex attributes are usually stored using FLOAT component type. However, this can result in excess precision and increased memory consumption and transmission size, as well as reduced rendering performance.
KHR_mesh_quantization's disadvantage is that the quantization parameters are fixed and cannot be modified; the upside of that is that no extra decoder is needed. EXT_meshopt_compression is its upgraded version, with customizable quantization parameters; for example, when the normals need more precision, you can spend more storage on them. KHR_draco_mesh_compression is the well-known extension; in this evaluation it achieved the highest compression ratio on pure-vertex models, but by comparison its decoder size and decode performance are not outstanding.
However, these quantizations are fixed conversions. For example, if the model itself is very small, it only uses a small range of floating-point values, and the precision loss is then relatively large. One solution is to increase the storage size; a simpler one is to scale the model up, so the distances between vertices become larger, and scale it back down when using it. A text description may not be very intuitive, so look at the animation: after the model is scaled up 10 times (these are not the same model version, so ignore the color changes), the differences in the tundish are negligible, but there are still texture offsets below.
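The scale-up trick can be checked numerically. Assuming a fixed quantization grid (snapping to 1/32767 steps over a fixed range), scaling a tiny model up by 10 before quantizing, then dividing by 10 after dequantizing, shrinks the absolute error roughly tenfold:

```javascript
// Fixed quantization grid: floats snapped to 1/32767 steps.
const quantize = (x) => Math.round(x * 32767) / 32767;

// A "tiny" model: vertex coordinates spanning only ~0.001 units.
const tiny = [0.0001234, -0.0005678, 0.0009999];

// Quantize directly: the grid step is huge relative to the model.
const directError = Math.max(...tiny.map((x) => Math.abs(quantize(x) - x)));

// Scale up 10x before quantizing, scale back down after dequantizing.
const scale = 10;
const scaledError = Math.max(
  ...tiny.map((x) => Math.abs(quantize(x * scale) / scale - x))
);
// The worst-case error drops from 0.5/32767 to 0.5/32767/scale.
```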
Guesses at future improvement directions
From the error problem above, where fixed quantization hurts models that occupy a small floating-point range, we can guess the next optimization: dynamic quantization plus an offset. By visual inspection this could further improve precision and compression ratio, though the decoder would need more time. For example, map the model's BoundingBox.xyz maximum to the 0-1 range, then map that to integers / a custom number of bytes X; the sub-meshes within a model could share the same mapping, or x, y and z could be maintained separately, and so on.
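A sketch of that idea in JS (all names are hypothetical; one axis for brevity): remap coordinates from the bounding box to 0-1, store them as Uint16, and keep the box min/extent as the offset and scale needed to reverse the mapping:

```javascript
// Hypothetical "dynamic quantization + offset": remap one axis of the
// positions from [min, max] to [0, 1], store as Uint16, and keep min/extent
// so the decoder can reverse the mapping.
function quantizeAxis(positions, bits = 16) {
  const levels = 2 ** bits - 1; // 65535 for 16 bits
  const min = Math.min(...positions);
  const max = Math.max(...positions);
  const extent = max - min || 1; // avoid dividing by zero for flat axes
  const q = Uint16Array.from(positions, (x) =>
    Math.round(((x - min) / extent) * levels)
  );
  return { q, min, extent, levels };
}

function dequantizeAxis({ q, min, extent, levels }) {
  return Array.from(q, (v) => min + (v / levels) * extent);
}

// Even for a model far from the origin, the error per axis is bounded by
// extent / (2 * levels), independent of the absolute coordinate values.
const xs = [100.001, 100.5039, 100.9999, 100.25];
const packed = quantizeAxis(xs);
const restored = dequantizeAxis(packed);
```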
These are just ideas, of course. The original purpose of this article was to show off a brilliant extension, MeshOpt, but I also wanted to test and verify its usability for real projects, and it turned into a process of exploration as well.