Comments (18)
I suppose this is the same issue as in #516 (comment). So you are trying to compile the densenet and getting this error after you replaced the global average pooling with a plain average pooling? Could you share the changes you made to the densenet in your local file so that we can reproduce exactly the error you are getting?
from concrete-ml.
I replaced the GlobalAveragePool line with
nn.AvgPool2d(kernel_size=7, stride=1, padding=0)
The PyTorch model trained successfully and was exported to ONNX; the error above arises when compiling to Concrete.
Can you also share what's your input shape?
The input is a 3 x 150 x 150 image.
If I put a (1, 3, 150, 150) input into densenet121, I get a tensor of shape (1, 1024, 4, 4) just before the line
out = F.adaptive_avg_pool2d(out, (1, 1))
and adaptive_avg_pool2d produces an output of shape (1, 1024, 1, 1). Since the feature map is only 4 x 4, I am not sure how nn.AvgPool2d(kernel_size=7, stride=1, padding=0) could work: the kernel is larger than the input.
Setting
out = F.avg_pool2d(out, kernel_size=(4, 4))
should work in your context.
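To see why a kernel_size=(4, 4) average pool reproduces the global pooling here, a minimal NumPy sketch using the shapes reported above (an illustration, not DenseNet code):

```python
import numpy as np

# Feature map shaped like densenet121's output for a (1, 3, 150, 150) input,
# as reported above: (1, 1024, 4, 4).
x = np.random.rand(1, 1024, 4, 4)

# F.adaptive_avg_pool2d(out, (1, 1)) averages each 4x4 spatial map down to 1x1.
global_pool = x.mean(axis=(2, 3), keepdims=True)

# F.avg_pool2d(out, kernel_size=(4, 4)), with the kernel covering the whole
# 4x4 spatial extent, computes exactly the same average.
avg_pool_4x4 = x.reshape(1, 1024, 1, 4, 1, 4).mean(axis=(3, 5))

assert global_pool.shape == (1, 1024, 1, 1)
assert np.allclose(global_pool, avg_pool_4x4)
```

This is why the fixed-kernel pool must match the actual spatial size (4, not 7) of the feature map.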
Thanks for the bug report.
It's hard to tell where the error comes from. The line you show uses NumPy functions, so it should return a numpy.float rather than a Python float. Could you print the values and types of stats.rmax, stats.rmin, options.n_bits, and self.offset just before that line?
Alternatively, could you give code that reproduces the issue?
My bad, I used the same changes. Please see the image; the error above is still there.
At line 527 of quantizer.py in Concrete ML, astype is called on a float, which throws this error.
Your update on this is highly appreciated.
Sorry, but that's not enough to help you debug this. Could you modify the code of quantizer.py as described in the previous comment and share the output?
Hi, thank you for your response.
Please see the attached values and types.
I also commented out the astype cast and printed self.scale.
- It seems the error is due to casting a float to float again: a built-in float has no astype method.
- When I comment out this cast and rerun, it now throws an error on line 776 instead.
- Both screenshots are attached.
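The diagnosis above can be checked in isolation: NumPy scalars have an astype method, while built-in Python floats do not, so any code path that hands a plain float to astype fails exactly like this (a standalone sketch, not the quantizer.py code):

```python
import numpy as np

np_scalar = np.float64(0.5)   # the type the cast expects
py_float = 0.5                # the type apparently received

# NumPy scalars support astype()...
assert np_scalar.astype(np.float64) == 0.5

# ...but built-in floats do not, which reproduces the reported error.
try:
    py_float.astype(np.float64)
except AttributeError as err:
    print(err)  # 'float' object has no attribute 'astype'
```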
Would it be possible for you to set up a GitHub repo with the code? It's not easy for us to reproduce from screenshots.
That being said, I see here that it's trying to quantize to 1024 bits, which is certainly going to be a problem. Are you setting n_bits to 1024?
The float value that is cast should never be a built-in float but a numpy.float; something is causing it to have the wrong datatype.
Please use n_bits=8, which should give a typical scale value instead of 10E-309.
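To see why a huge n_bits produces degenerate scales around 10E-309, here is a sketch of a generic uniform-quantization scale formula (an illustration, not Concrete ML's exact code):

```python
def quant_scale(rmin, rmax, n_bits):
    # Generic uniform-quantization step size: the value range divided by
    # the number of representable levels.
    return (rmax - rmin) / (2 ** n_bits - 1)

# With n_bits=8 the scale is a sensible step size for values in [-100, 100].
print(quant_scale(-100.0, 100.0, 8))      # ~0.784

# A huge n_bits drives the scale toward the float64 underflow limit, which
# is where values like 1E-309 come from (n_bits=1000 used here so the
# divisor still fits in a float64).
print(quant_scale(-100.0, 100.0, 1000))
```

With such a tiny scale, subsequent arithmetic on it underflows or misbehaves, consistent with the degenerate values seen in the screenshots.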
Hi @andrei-stoian-zama , I really appreciate your help. I have added the code to a GitHub repo. Here is the link.
https://github.com/malickKhurram/COVID-19-Detection
- Please use this notebook. Links to the datasets used are given inside the notebook.
- I have exported the model using ONNX.
- I skipped the compile_onnx_model code; you can set your bits and other parameters and compile the model.
I really need to make this model Concrete ML compatible. Your help in this regard is appreciated.
One first observation is that the compile_onnx_model code that you show uses an input set of 5000 examples of shape (1024,). However, the network expects (1, 150, 150). Are you sure the model in the notebook is the right one?
Furthermore, doing:
import numpy as np
from concrete.ml.torch.compile import compile_onnx_model
inputset = np.random.uniform(-100, 100, (50, 1024))
compile_onnx_model(onnx_model, inputset, n_bits=8)
raises the GlobalAveragePool error. Can you show how you modify your model to replace it with a standard average pool?
I tried exporting only the model.features submodel like this:
import torch
import onnx
import numpy as np
# Input to the model
x = torch.randn(1, 3, 150, 150, requires_grad=True)
torch.onnx.export(model.features, x, "my_image_classifier.onnx", export_params=True)
onnx_model = onnx.load("my_image_classifier.onnx")
onnx.checker.check_model(onnx_model)
from concrete.ml.torch.compile import compile_onnx_model
inputset = np.random.uniform(-100, 100, (50, 3, 150, 150))
compile_onnx_model(onnx_model, inputset, n_bits=8)
and it raises AssertionError: All inputs must have the same scale and zero_point to be concatenated., which is normal, as Concrete ML does not yet support concatenation in post-training quantization import (compile_torch / compile_onnx).
Can you please provide some minimal code that reproduces the float object error?
Thank you for your help.
I am no longer getting the float error after using the exact same parameters you used.
I have 2 questions and would appreciate your help.
- Is there a fix or workaround for this AssertionError?
- Can you please share an image-classification model that I can compile with Concrete ML?
@andrei-stoian-zama @jfrery
Can you please advise on the thread above?
I will close this thread, as there is already one about the missing GlobalAveragePool support.
For image classification, please see the CIFAR example: https://github.com/zama-ai/concrete-ml/tree/main/use_case_examples/cifar/cifar_brevitas_finetuning