Comments (5)
Not necessary. brevitas.nn layers always return values in the dequantized range.
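For instance, the output of a quantized activation takes only a limited set of values, but those values live on the FP32 scale rather than being raw integers. A minimal sketch, assuming the QuantReLU keyword names from this API era:

import torch
from brevitas.nn import QuantReLU
from brevitas.core.quant import QuantType

# 4-bit quantized ReLU: the output takes at most 2**4 distinct levels,
# but those levels sit in float range, not in {0, ..., 15}.
act = QuantReLU(bit_width=4, max_val=6.0, quant_type=QuantType.INT)
y = act(torch.randn(4, 16))
print(torch.unique(y))  # few distinct float values, i.e. dequantized range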
Ok. Also, would you advise using PyTorch's functional version of sigmoid/log_sigmoid, or Brevitas's? My understanding is that PyTorch's functional sigmoid operates in FP32, which can give better resolution than Brevitas's. Hence, it might be better to use PyTorch's sigmoid if we are not too concerned about the cost of that particular sigmoid function.
It really depends on your specific use case. What QuantSigmoid does is take an FP32 input, apply a sigmoid activation function, and then quantize the output according to your specification.
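A minimal sketch of that behavior (keyword names assumed from the same API era as the rest of this thread):

import torch
from brevitas.nn import QuantSigmoid
from brevitas.core.quant import QuantType

x = torch.randn(2, 8)
qsig = QuantSigmoid(bit_width=4, quant_type=QuantType.INT)

print(torch.sigmoid(x))  # plain FP32 sigmoid, full resolution
print(qsig(x))           # same sigmoid, output snapped to 4-bit levels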
Ok. I am talking specifically about the use case of training for classification. In that case, I believe F.sigmoid is the better choice, given that I'd want to use F.nll_loss downstream for training and it'll give me better resolution.
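For the binary case described here, one way to pair a full-precision sigmoid head with F.nll_loss is to build two-class log-probabilities out of log-sigmoid; this is a sketch, relying on the identity log sigmoid(-x) = log(1 - sigmoid(x)):

import torch
import torch.nn.functional as F

logits = torch.randn(8, 1)  # head output of the (possibly quantized) network
log_p1 = F.logsigmoid(logits)   # log P(y = 1)
log_p0 = F.logsigmoid(-logits)  # log P(y = 0) = log(1 - sigmoid(logits))
log_probs = torch.cat([log_p0, log_p1], dim=1)
targets = torch.randint(0, 2, (8,))
loss = F.nll_loss(log_probs, targets)  # full FP32 resolution at the head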
Edit: Additionally, I couldn't see where in the codebase de-quantization actually happens at the layer output. Does it explicitly require us to set the following properties to True for a layer, as shown below?
self.fc1 = qnn.QuantLinear(..., compute_output_scale=True, compute_output_bit_width=True, return_quant_tensor=True)
The reason I am digging deeper into this is that I am getting pretty good results with even very low values of bit_width, such as 2 or 4, while leaving all the layer properties at their defaults and only tweaking bit_width.
If your accuracy is unreasonably high, it might be that you have quantization disabled. The default behavior is to have quantization disabled, i.e. a QuantConv2d behaves by default as a Conv2d. To enable integer quantization you need to set weight_quant_type=QuantType.INT. I understand this might be confusing, so I'll force the user to specify the QuantType in a later update.
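For example, enabling integer weight quantization might look like this (a sketch, using the qnn alias from the snippet above; the bit width is illustrative):

import torch
import brevitas.nn as qnn
from brevitas.core.quant import QuantType

# Default: quantization disabled, behaves like a plain Conv2d.
conv_fp = qnn.QuantConv2d(3, 16, kernel_size=3)

# Integer quantization enabled: 4-bit weights.
conv_int = qnn.QuantConv2d(3, 16, kernel_size=3,
                           weight_quant_type=QuantType.INT,
                           weight_bit_width=4)

y = conv_int(torch.randn(1, 3, 32, 32))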
Regarding dequantization, it is not exposed to the user as a separate layer. It's performed internally and is called as part of every quantized layer.
What those flags do is explicitly compute the scale factor of the output accumulator, as well as its maximum bit width, and return them as a quantized tensor, which is a named tuple composed of (output_tensor, output_scale_factor, output_bit_width). The relationship between those values is that output_tensor/output_scale_factor gives integer values (when you have bias disabled) that can be represented with output_bit_width bits. Enabling them is not required in general.
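A rough illustration of that relationship (a sketch only: the keyword names follow this thread, the preceding activation is assumed to return a quant tensor so the output scale can be derived, and details may differ across Brevitas releases):

import torch
from brevitas.nn import QuantHardTanh, QuantLinear
from brevitas.core.quant import QuantType

# Quantized activation that passes (tensor, scale, bit_width) downstream.
act = QuantHardTanh(bit_width=8, quant_type=QuantType.INT,
                    return_quant_tensor=True)
fc = QuantLinear(64, 10, bias=False,
                 weight_quant_type=QuantType.INT, weight_bit_width=4,
                 compute_output_scale=True,
                 compute_output_bit_width=True,
                 return_quant_tensor=True)

out, scale, bit_width = fc(act(torch.randn(1, 64)))
int_out = out / scale  # integer-valued, since bias is disabled
assert torch.allclose(int_out, int_out.round(), atol=1e-4)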
This sort of information is going to go into the documentation as soon as the API stabilizes enough.