
Comments (10)

zhuangh commented on May 7, 2024

Hi @xhlulu

Thank you very much for the question. Sorry, we do not have the document (yet).

Besides the tips from @nunescoelho, for now you may follow example/example_keras_to_qkeras.py to convert a model to its quantized counterpart. For example, the model_quantize usage:

qmodel, _ = model_quantize(model, q_dict, 4)

Then use qmodel.load_weights("Your_weight_file") just as you normally would.
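For reference, here is a minimal sketch of what such a q_dict can look like. The layer/class names and quantizer settings below are hypothetical, and the commented-out calls assume the same API as the snippet above:

```python
# Hypothetical quantization config: maps layer or class names to
# quantizer strings (patterned after example_keras_to_qkeras.py).
q_dict = {
    "dense_1": {
        "kernel_quantizer": "quantized_bits(4, 0, 1)",
        "bias_quantizer": "quantized_bits(4, 0, 1)",
    },
    "QActivation": {"relu": "quantized_relu(4)"},
}

# With QKeras installed, the conversion would then be:
#   from qkeras.utils import model_quantize
#   qmodel, _ = model_quantize(model, q_dict, 4)
#   qmodel.load_weights("your_weights.h5")
```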

If you want to see the quantized weights, you might want to try model_save_quantized_weights from here:

def model_save_quantized_weights(model, filename=None):
or just

from tensorflow.keras import backend as K

for layer in qmodel.layers:
  try:
    if layer.get_quantizers():
      for quantizer, weight in zip(layer.get_quantizers(), layer.get_weights()):
        if quantizer is None:  # some weights (e.g. biases) may be unquantized
          continue
        qweight = K.eval(quantizer(weight))
        print("quantized weight")
        print(qweight)
  except AttributeError:
    print("warning, the weights are not quantized in layer %s" % layer.name)

@nunescoelho it would be helpful if we documented this process.

from qkeras.

nunescoelho commented on May 7, 2024

Remember that QKeras layers only change the behavior of the forward pass (that is the straight-through estimator), so look at these functions:

  • model_save_quantized_weights
  • model_quantize

In particular, model_quantize optionally transfers the weights from the original Keras model to the quantized model at the end.

And remember, as a rule of thumb, every arithmetic operation needs to be followed by a quantizer if you want the QKeras model to mimic an actual implementation.
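As a sketch of that rule of thumb, here is a toy fixed-point forward pass in plain NumPy where every arithmetic result is re-quantized. The quantize function only loosely imitates quantized_bits; it is not the QKeras implementation:

```python
import numpy as np

def quantize(x, bits=4, int_bits=0):
    # Fake-quantize onto a fixed-point grid with `bits` total bits and
    # `int_bits` integer bits (loosely modeled on quantized_bits).
    step = 2.0 ** (int_bits - (bits - 1))
    return np.clip(np.round(x / step) * step,
                   -2.0 ** int_bits, 2.0 ** int_bits - step)

x = quantize(np.array([0.3, -0.7]))                      # quantized input
w = quantize(np.array([[0.5, -0.25], [0.125, 0.625]]))   # quantized weights
y = quantize(x @ w)                # re-quantize right after the matmul
relu = quantize(np.maximum(y, 0.0))  # ...and after every subsequent op
```

Dropping any of those intermediate quantize calls is exactly what makes a QKeras model diverge from a fixed-point implementation of the same network.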


asti205 commented on May 7, 2024

I was also applying QKeras to a pretrained model, but unfortunately my validation accuracy was very low after using model_quantize.
Is retraining necessary, i.e. can QKeras only be used for quantization-aware training? It seems to me that loading a pretrained model and quantizing it (post-training quantization) does not work.
And is there any documentation yet on using QKeras with pretrained models?

Best regards and many thanks,
asti205


zhuangh commented on May 7, 2024

@asti205 retraining is necessary, or you can train with QKeras directly by converting your Keras model into its QKeras version.
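A minimal sketch of why retraining works, assuming the straight-through estimator mentioned above: the forward pass uses quantized weights, while the gradient update lands on a full-precision copy, so the weights can migrate onto the quantization grid. This is illustrative NumPy, not QKeras code:

```python
import numpy as np

def quantize(w, step=0.125):
    # Snap weights to a uniform grid (stand-in for a QKeras quantizer).
    return np.round(w / step) * step

rng = np.random.default_rng(0)
w = rng.normal(size=2)        # full-precision master weights
x = np.array([1.0, -1.0])     # fixed toy input
target = 0.5                  # desired output, representable on the grid
lr = 0.1

for _ in range(100):
    y = x @ quantize(w)           # forward pass uses QUANTIZED weights
    grad_y = 2.0 * (y - target)   # d(squared error)/dy
    w -= lr * grad_y * x          # STE: gradient skips the quantizer and
                                  # updates the full-precision copy
```

Pure post-training quantization skips this loop, which is why accuracy can drop sharply when the pretrained weights do not sit near the grid.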


nunescoelho commented on May 7, 2024


asti205 commented on May 7, 2024

Hello @nunescoelho ,

thank you for the explanation, but that is clear to me :)

The reason I was asking is that I could get a validation accuracy that was orders of magnitude higher using the TFLite converter, so I was explicitly interested in post-training quantization. However, I could also get a much higher accuracy with my own quantizer that approximates the TFLite converter's behaviour.

Also, I think fixing the fractional bit widths up front is not the best option; it is better to quantize adaptively depending on the actual weights. At least that is what I found from analyzing the TFLite converter.
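As a sketch of that adaptive approach, here is the range-based per-tensor scale that symmetric int8 post-training quantization uses: the step size is derived from the observed weight range rather than from a fixed fractional bit width (illustrative NumPy, not the actual TFLite converter code):

```python
import numpy as np

def quantize_adaptive(w, bits=8):
    # Fit the quantization step to the actual weight range.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    q = np.round(w / scale).astype(np.int8)   # integer weights
    return q, scale

w = np.array([0.03, -0.011, 0.007, 0.025])    # small-range weights
q, scale = quantize_adaptive(w)
w_hat = q * scale                             # dequantized approximation
```

With weights this small, a fixed-point quantizer with few integer bits would waste most of its range, while the adaptive scale keeps the full int8 resolution on the interval the weights actually occupy.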

Best regards,
asti205


Sejudyblues commented on May 7, 2024

Hey, I want to know: on Windows 10, pip install qkeras gives me an error. How should I install the qkeras package?


zhuangh commented on May 7, 2024

Hi @Sejudyblues thank you for the question!

I assume you could find the package via pip install qkeras, right?

Could you try cloning the repo and running python setup.py install to install this package? By the way, we have not tried it on Windows 10.


zhuangh commented on May 7, 2024

Since it has been quiet for a while, I am closing this issue. Feel free to reopen it.


Xyfuture commented on May 7, 2024

Hi, I am looking for a tool for post-training quantization with custom options (like bit width). I use TensorFlow and Keras for my project; I trained a model and pruned it with tfmot, and now I want to quantize it to a full-integer model. Since retraining would change the sparsity of the model, I don't want to use quantization-aware training. The other option is TFLite, but it doesn't support quantization with custom options, which is how I found this project. Can I use it to solve my problem? I also learned that full-integer quantization needs a representative dataset to calibrate the activation quantization; do any functions in this project do that? Or could I change the source code to eliminate back-propagation and use training only to obtain the parameters for activation quantization?

Thanks!
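For context, the representative-dataset calibration mentioned here can be sketched as follows: run a few calibration batches through the network, record the activation range, and freeze the activation scale from it. This is illustrative NumPy in the spirit of full-integer TFLite conversion; as far as I can tell QKeras does not ship this exact helper:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))          # toy layer weights

def layer(x):
    return np.maximum(x @ W, 0.0)    # toy dense + ReLU layer

# 1) Calibration: run representative batches, track the activation range.
act_max = 0.0
for _ in range(10):
    batch = rng.normal(size=(8, 4))
    act_max = max(act_max, float(layer(batch).max()))

# 2) Freeze an 8-bit unsigned activation scale from that range.
scale = act_max / 255.0

def quantize_act(a):
    # Clip to the calibrated range and snap to the 8-bit grid.
    return np.clip(np.round(np.asarray(a) / scale), 0, 255) * scale
```

No gradients are involved, so a calibration pass like this leaves the pruned sparsity pattern untouched.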

