
Comments (9)

bjacob commented on May 4, 2024

The result scale and zero_point are not to be inferred from the inputs' scale and zero_point; that's why neither our example code nor our paper gives formulas for them. There is no formula for that.

Instead, the quantization parameters of the result must be given by the user.

In a typical quantized neural network application, as in our paper, it is the training process that will record the min-max used for each matrix, including for the result matrix. The quantization and inference process will then use that pre-recorded min-max to quantize the result matrix.
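
For reference, a minimal Python sketch (illustrative, not gemmlowp's actual code) of how a pre-recorded min-max range is commonly turned into a (scale, zero_point) pair under the affine scheme real_value = scale * (quantized_value - zero_point) used in the paper:

def choose_quantization_params(rmin, rmax, qmin=0, qmax=255):
    # Extend the range to contain 0.0 so that real zero is exactly representable
    # (important e.g. for zero padding).
    rmin = min(rmin, 0.0)
    rmax = max(rmax, 0.0)
    scale = (rmax - rmin) / (qmax - qmin)
    # zero_point is the quantized value that represents real 0.0,
    # nudged to an integer inside [qmin, qmax].
    zero_point = int(round(qmin - rmin / scale))
    zero_point = max(qmin, min(qmax, zero_point))
    return scale, zero_point

# Example: a result matrix whose recorded training-time range was [-3.2, 7.9].
result_scale, result_zero_point = choose_quantization_params(-3.2, 7.9)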

bjacob commented on May 4, 2024

Maybe our paper gives more context:
https://arxiv.org/abs/1712.05877

haoyan01 commented on May 4, 2024

@bjacob
Hi Benoit,
I read the paper you mentioned, but I still have the same question.

result_quantized_value = result_zero_point + (lhs_scale * rhs_scale / result_scale) * Sum_over_i( (lhs_quantized_value[i] - lhs_zero_point) * (rhs_quantized_value[i] - rhs_zero_point) ) (5)

The above equation is the basic scheme for calculating the quantized matrix multiplication. Since the input matrices are given, the lhs_scale * rhs_scale factor and the Sum_over_i part are easy to compute. But how to calculate result_scale and result_zero_point is not well described in either the paper or the gemmlowp documents.
Assuming the result quantized value has 8 bits, my guess is

255 = result_quantized_value_max = result_zero_point + (lhs_scale * rhs_scale / result_scale) * Sum_over_i_max (a)

and

0 = result_quantized_value_min = result_zero_point + (lhs_scale * rhs_scale / result_scale) * Sum_over_i_min (b)

Subtracting (b) from (a), we get:

255 = (lhs_scale * rhs_scale / result_scale) * (Sum_over_i_max - Sum_over_i_min) (c)

Then,

result_scale = (lhs_scale * rhs_scale / 255) * (Sum_over_i_max - Sum_over_i_min)

Since Sum_over_i_max and Sum_over_i_min can be calculated, result_scale can be obtained from the above equation. Is that correct, and is that how you calculate result_scale and result_zero_point? Thank you so much.
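
For concreteness, here is a small Python sketch of equation (5) as it is actually applied: result_scale and result_zero_point are inputs supplied by the caller (taken from the training-time recorded range of the result matrix), rather than quantities derived from the current lhs/rhs data. This is illustrative code, not gemmlowp's implementation:

import numpy as np

def quantized_dot(lhs_q, rhs_q,
                  lhs_scale, lhs_zero_point,
                  rhs_scale, rhs_zero_point,
                  result_scale, result_zero_point,
                  qmin=0, qmax=255):
    # Integer accumulation of Sum_over_i((lhs_q[i] - lhs_zp) * (rhs_q[i] - rhs_zp)).
    acc = np.sum((lhs_q.astype(np.int32) - lhs_zero_point) *
                 (rhs_q.astype(np.int32) - rhs_zero_point))
    # Equation (5): rescale with the caller-provided result parameters.
    real_multiplier = lhs_scale * rhs_scale / result_scale
    q = result_zero_point + real_multiplier * acc
    return int(np.clip(np.round(q), qmin, qmax))

Deriving result_scale from Sum_over_i_max / Sum_over_i_min as in (c) would tie the output quantization to each particular input, whereas the scheme described above fixes it ahead of time from ranges recorded during training.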

haoyan01 commented on May 4, 2024

@bjacob
Thanks Benoit. Is there any pretrained quantized model, such as MobileNet, that contains the scales and zero_points?

bjacob commented on May 4, 2024

I think there is; explore around
https://www.tensorflow.org/mobile/tflite/
and maybe ask on the issue tracker there if it's not obvious.

sxsxsx commented on May 4, 2024

@bjacob Hello,
From your paper https://arxiv.org/abs/1712.05877 I understand that during training with simulated quantization you only quantize the weights and activations, so we can get their corresponding scale and zero_point.

(1) Could you tell me how to get the result scale and zero_point during the training process?
Is it right to run inference with the un-quantized model, collect [a; b] ranges for the result, and handle them just like the activations during training with simulated quantization?

You said that "The quantization and inference process will then use that pre-recorded min-max to quantize the result matrix."
(2) How do you ensure that the quantized model with pre-recorded min-max still generalizes?

Thanks a lot, and good luck to you @bjacob.
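
For context on question (1), the paper's approach records activation ranges during training via exponential moving averages of the observed min/max; a minimal sketch of that idea (my illustration, with an assumed decay value, not the paper's code):

class EmaRangeTracker:
    # Tracks smoothed min/max of a tensor across training batches.
    def __init__(self, decay=0.99):  # decay value is an illustrative placeholder
        self.decay = decay
        self.min = None
        self.max = None

    def update(self, batch_values):
        b_min = float(min(batch_values))
        b_max = float(max(batch_values))
        if self.min is None:
            self.min, self.max = b_min, b_max
        else:
            self.min = self.decay * self.min + (1.0 - self.decay) * b_min
            self.max = self.decay * self.max + (1.0 - self.decay) * b_max
        return self.min, self.max

# At the end of training, the tracked (min, max) for the result matrix is the
# pre-recorded range used to derive result_scale and result_zero_point.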

bjacob commented on May 4, 2024

Redirecting these questions to @skligys who wrote Section 3 of this paper on training and is generally the training expert :-)

zyc4me commented on May 4, 2024

Same question. I have trained a quantized model with the TF Object Detection API, but when I look at the global variables in the ".ckpt", I only find the weight min/max and the min/max after relu6 (0 / 5.9997); there is no output min/max for the conv itself. Why?
The names of the min/max tensors look like this:

FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/min:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/max:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/min/biased:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/min/local_step:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/max/biased:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/max/local_step:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/weights_quant/min:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/weights_quant/max:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/act_quant/min:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/act_quant/max:0

bjacob commented on May 4, 2024
