Comments (22)
Yes, when training your own model, disable lto, and (of course) make sure you're passing -Oz to clang.
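For reference, a minimal sketch of the project-side flags when building a corpus for training, based on the advice above plus the embedded-bitcode flag that comes up in the related questions below (everything else about the build is assumed):

```sh
# Assumed project flags for corpus extraction: -Oz for size, embedded bitcode
# so the trainer can recompile modules, and no -flto anywhere.
export CFLAGS="-Oz -Xclang -fembed-bitcode=all"
export CXXFLAGS="-Oz -Xclang -fembed-bitcode=all"
# (then build the project as usual with these flags)
```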
Hi @yundiqian, I have migrated the demo project to the chrome/v8 project and got a 5% reduction in binary size. I want to know whether I need to regenerate the saved model or can reuse exactly the one generated for Fuchsia?
I'm a little confused; to be clear, which model caused the 5% size reduction, and on which binary?
emm.. I have tried three projects using ml-compiler-opt and got a 7% size reduction on the Fuchsia demo, 5% (trained 100*2000 steps, to save time) on the chrome/v8 build, and -2% on my personal project (-.-)
I am retraining the third model because I trained it with -flto last time.
Plus, I am migrating the model to an Android CMake project whose stripped binary size is about 1.7 MB.
Got it, so it's 3 projects instead of 2 :) Is the "Android CMake project whose stripped binary size is about 1.7 MB" a 4th project, different from your personal project?
In addition to retraining without -flto, you can also try our model included in llvm --- this is a model that we found generalizable across SPEC, so probably generalizable to your project as well.
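As a minimal sketch, "trying the model included in llvm" on the consumer side comes down to one extra driver flag (the source file here is hypothetical; the clang must have been built with the TensorFlow AOT support discussed below):

```sh
# -enable-ml-inliner=release selects the ML inlining advisor embedded in clang
clang -Oz -mllvm -enable-ml-inliner=release -c example.c -o example.o
```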
I see now - thanks!
(fwiw - LLVM_ENABLE_LTO can be enabled for clang - just no -flto for your project)
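A configure sketch of that split, assuming the TENSORFLOW_AOT_PATH variable that comes up later in this thread (paths are placeholders):

```sh
# LLVM_ENABLE_LTO here applies only to building clang itself; the project
# compiled with the resulting clang should still avoid -flto.
cmake -G Ninja ../llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_PROJECTS=clang \
  -DLLVM_ENABLE_LTO=ON \
  -DTENSORFLOW_AOT_PATH=/path/to/tensorflow   # embeds the release-mode model
ninja clang
```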
After testing, the original saved model could not be reused, so I'm closing this!
When you say "the original saved model could not be reused", do you mean you could not build the compiler with it embedded, or that its performance wasn't as good as that of the model you trained on chrome/v8?
It compiles okay, but the binary size is bigger than the one that doesn't use the model, so I am training a new model for my personal project.
I see, that is possible; Fuchsia code may be quite different from the v8 code, so the model trained on Fuchsia does not work well on v8.
@yundiqian @mtrofin
Sadly, I found that the model trained specifically for the project didn't work: the .so size was bigger than the one built without it. Is there any way to find which compile flag influences the result? Do I need to drop the -flto flag?
We don't support lto currently.
The model included with llvm is a reasonable reference, but we didn't use an overly comprehensive codebase when we trained it; that's why Fuchsia, for example, builds their own, which holds up well over time (as their codebase and as the compiler evolve).
So I need to regenerate the model with an llvm build that has LTO disabled?
I'll try it. Thanks!
Okay, I will try it ASAP. So many thanks~
Unfortunately, the binary built with the trained model is about 3% bigger than the original one, which uses -flto -faddrsig / -flto -Wl,-z,norelro,-z,lazy,--icf=all.
To make sure I understand: you trained a model on your project (without LTO, but with -Oz), and then built with that model (also without LTO, and with -Oz).
How does that size compare to all other options being the same, except building with the default heuristic?
Here are the approaches:
- normally build the project with nothing changed: binary is 1705 kB
- now delete -flto with the other flags unchanged: binary is 1755 kB
- build llvm with the latest model in llvm-project, with LLVM_ENABLE_LTO off and TENSORFLOW_AOT_PATH set, then delete -flto and add -mllvm -enable-ml-inliner=release: binary is 1823 kB
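For what it's worth, a sketch of one consistent way to measure the stripped sizes being compared here (the library name is a placeholder; the thread doesn't say which tool produced the numbers):

```sh
# Strip the binary the same way for every variant, then compare sizes.
llvm-strip --strip-all libexample.so -o libexample.stripped.so
ls -l libexample.stripped.so   # size in bytes
```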
I haven't built the project-specific model yet because it needs a lot of time to train; once it's done, I'll post the result here~
FYI, I've tested my personal project twice, with and without the SPEC model, and found that the project-specific model is better than the SPEC one but still worse than the original. Here is some data (kilobytes):
- origin: 1712312
- disable -flto: 1763256
- disable -flto and use the project-specific model (enable-ml-inliner): 1757576
- use -flto and the project-specific model: 1716344
hmm... interesting, we need to look into what happens during training to debug.
Can you share your log file from training via tensorboard.dev, following the instructions here: https://tensorboard.dev/#get-started? (basically running two command lines)
When running "tensorboard dev upload --logdir logs...", set the logdir flag to the root_dir flag you used when running train_locally.py
Okay, I'll try it.
I've also tried the cronet project and the reduction is also not obvious... I wonder whether my training process is wrong.
Here are the details of how I applied the model: https://gist.github.com/Colibrow/9d2b31bc7eff127cfe74c807fce86451
I also found that using -flto alone may reduce size more than applying the trained model alone... I will post the log file later~
Related Issues (20)
- [Question] How to use GPU training, just install tensorflow-gpu? Will there be better performance if using a larger model?
- [Question] Why use llvm-size to calculate rewards? Does llvm also calculate size rewards?
- [Question] Can you open-source the code of the ES algorithm?
- [Question] What parameters need to be passed in to compile the data set? -Oz -Xclang -fembed-bitcode=all?
- How to train a model using a binary's llvmbc and llvmcmd segments? I want to optimize directly using the executable program
- Why can't I use llvmbc and llvmcmd of executable programs?
- Questions about feature log
- Is it not very accurate to use the size reward of the entire file as the reward for each caller-callee feature, if the file is large and has a large number of caller-callee pairs?
- What does the size of sequence_examples depend on, and how to set its size?
- Does llvm-15.04 support mlgo? What versions of tensorflow and other libraries are needed?
- Why is the length of the reward limited to 3 or more?
- How to know the effect of model inlining when training the model?
- How to get the model.tflite file from inlining-Oz-99f0063-v1.1.tar.gz
- Why "-static" affects the test results of the model
- Why do we need to calculate reward_stat? I see llvm_trainer.train uses the reward from sequence_example.reward
- Can I merge all the bc files into a total bc file for training?
- How to compile another dataset using llvm's thinlto flag?
- Where do I find pretrained models for MLGOPerf?
- `--compile_task` flag missing
- [non-issue] MLGO Questions