
concrete-ml's Introduction

Zama Concrete ML


📒 Documentation | 💛 Community support | 📚 FHE resources by Zama

SLSA 3

About

What is Concrete ML

Concrete ML is an open-source set of Privacy-Preserving Machine Learning (PPML) tools built on top of Concrete by Zama.

It simplifies the use of fully homomorphic encryption (FHE) for data scientists so that they can automatically turn machine learning models into their homomorphic equivalents, and use them without knowledge of cryptography.

Concrete ML is designed with ease of use in mind. Data scientists can use models with APIs that are close to the frameworks they already know well, while additional model options let them run inference or training on encrypted data with FHE. The Concrete ML model classes are similar to those in scikit-learn, and it is also possible to convert PyTorch models to FHE.

Main features

  • Built-in models: Ready-to-use FHE-friendly models with a user interface equivalent to their scikit-learn and XGBoost counterparts
  • Custom models: Concrete ML supports models that use quantization-aware training. These are developed by the user in PyTorch or Keras/TensorFlow and are imported into Concrete ML through ONNX

Learn more about Concrete ML features in the documentation.

Use cases

By leveraging FHE, Concrete ML can unlock a myriad of new use cases for machine learning, such as enabling secure and private data collaboration, protecting sensitive data while still allowing analysis, and facilitating machine learning on data-sets that are subject to strict data privacy regulations. For instance:

  • Healthcare data analysis: Improve patient care while maintaining privacy by allowing secure, confidential data sharing between healthcare providers.
  • Financial services: Facilitate secure financial data analysis for risk management and fraud detection, keeping client information encrypted and safe.
  • Ad campaign tracking: Create targeted advertising and campaign insights in a post-cookie era, ensuring user privacy through encrypted data analysis.
  • Industries: Enable predictive maintenance in the cloud while keeping sensitive data confidential, enhancing efficiency and data security.
  • Biometrics: Enable the creation of user authentication applications without requiring users to reveal their identities.
  • Government: Enable governments to create digitized versions of their services without having to trust cloud providers.

See more use cases in the list of demos.

Table of Contents

Getting Started

Installation

Depending on your OS, Concrete ML may be installed with Docker or with pip:

OS / HW                                 | Available on Docker | Available on pip
----------------------------------------|---------------------|-----------------
Linux                                   | Yes                 | Yes
Windows                                 | Yes                 | No
Windows Subsystem for Linux             | Yes                 | Yes
macOS 11+ (Intel)                       | Yes                 | Yes
macOS 11+ (Apple Silicon: M1, M2, etc.) | Coming soon         | Yes

Note: Concrete ML only supports Python 3.8, 3.9 and 3.10. Concrete ML can be installed on Kaggle (see this question on the community forum for more details) and on Google Colab.
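
A quick way to check that your interpreter falls in this supported range (just a convenience snippet, not an official installation step):

# Check that the interpreter is within the supported range (Python 3.8 to 3.10).
import sys
assert (3, 8) <= sys.version_info[:2] <= (3, 10), f"Unsupported Python: {sys.version}"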

Docker

To install with Docker, pull the concrete-ml image as follows:

docker pull zamafhe/concrete-ml:latest

Pip

To install Concrete ML from PyPI, run the following:

pip install -U pip wheel setuptools
pip install concrete-ml

Find more detailed installation instructions in this part of the documentation.
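
Once installed, a simple way to confirm that the package is visible to Python (a convenience check, not an official step) is to query its version with the standard library:

# Print the installed concrete-ml version using importlib.metadata (Python 3.8+).
from importlib.metadata import version
print(version("concrete-ml"))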

↑ Back to top

A simple example

Here is a simple example, very close to scikit-learn, of a logistic regression:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from concrete.ml.sklearn import LogisticRegression

# Let's create a synthetic data-set
x, y = make_classification(n_samples=100, class_sep=2, n_features=30, random_state=42)

# Split the data-set into a train and test set
X_train, X_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, random_state=42
)

# Now we train in the clear and quantize the weights
model = LogisticRegression(n_bits=8)
model.fit(X_train, y_train)

# Run the predictions in the clear
y_pred_clear = model.predict(X_test)

# We then compile on a representative set
model.compile(X_train)

# Finally, we run the inference on encrypted inputs!
y_pred_fhe = model.predict(X_test, fhe="execute")

print("In clear  :", y_pred_clear)
print("In FHE    :", y_pred_fhe)
print(f"Similarity: {int((y_pred_fhe == y_pred_clear).mean()*100)}%")

# Output:
#   In clear  : [0 0 0 0 1 0 1 0 1 1 0 0 1 0 0 1 1 1 0 0]
#   In FHE    : [0 0 0 0 1 0 1 0 1 1 0 0 1 0 0 1 1 1 0 0]
#   Similarity: 100%



It is also possible to call encryption, model prediction, and decryption functions separately as follows. Executing these steps separately is equivalent to calling predict_proba on the model instance.

# Predict probability for a single example
y_proba_fhe = model.predict_proba(X_test[[0]], fhe="execute")

# Quantize an original float input
q_input = model.quantize_input(X_test[[0]])

# Encrypt the input
q_input_enc = model.fhe_circuit.encrypt(q_input)

# Execute the linear product in FHE
q_y_enc = model.fhe_circuit.run(q_input_enc)

# Decrypt the result (integer)
q_y = model.fhe_circuit.decrypt(q_y_enc)

# De-quantize and post-process the result
y0 = model.post_processing(model.dequantize_output(q_y))

print("Probability with `predict_proba`: ", y_proba_fhe)
print("Probability with encrypt/run/decrypt calls: ", y0)

This example is explained in more detail in the linear model documentation.

Concrete ML built-in models have APIs that are almost identical to their scikit-learn counterparts. It is also possible to convert PyTorch networks to FHE with the Concrete ML conversion APIs. Please refer to the linear models, tree-based models and neural networks documentation for more examples, showing the scikit-learn-like API of the built-in models.
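
As a rough illustration of the PyTorch conversion path, here is a minimal sketch. It assumes the compile_torch_model helper from concrete.ml.torch.compile (the function referenced in several issues below) and that the returned quantized module exposes a forward method accepting an fhe argument; exact arguments and behavior may differ between versions, so check the documentation for the version you use.

# Hedged sketch: compiling a small PyTorch model with Concrete ML.
import numpy
import torch
from concrete.ml.torch.compile import compile_torch_model

torch_model = torch.nn.Sequential(
    torch.nn.Linear(10, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 2),
)

# A small representative input set, used for quantization calibration and compilation
inputset = numpy.random.uniform(-1, 1, size=(100, 10)).astype(numpy.float32)

quantized_module = compile_torch_model(torch_model, inputset, n_bits=3)

# Run the converted model; fhe="simulate" avoids key generation for a quick check
y = quantized_module.forward(inputset[:1], fhe="simulate")
print(y)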

↑ Back to top

Resources

Demos

Live demos on Hugging Face

  • Credit card approval: Predicting the approval of credit scoring card applications, where sensitive data can be shared and analyzed without exposing the actual information either to the three parties involved or to the server processing it.
    • Check the code here
  • Sentiment analysis with transformers: predicting if an encrypted tweet / short message is positive, negative or neutral, using FHE.
  • Health diagnosis: giving a diagnosis using FHE, based on a patient's symptoms, history and other health factors, while preserving the patient's privacy.
    • Check the code here
  • Encrypted image filtering: filtering encrypted images by applying filters such as black-and-white, ridge detection, or your own filter.
    • Check the code here

Other demos

  • Encrypted Large Language Model: converting a user-defined part of a Large Language Model for encrypted text generation. This demo shows the trade-off between quantization and accuracy for text generation and shows how to run the model in FHE.
  • Private inference for federated learned models: private training of a Logistic Regression model and then importing the model into Concrete ML and performing encrypted prediction.
  • Titanic: solving the Kaggle Titanic competition. Implemented with XGBoost from Concrete ML, this example comes as a companion to the Kaggle notebook and was the subject of a blog post on KDnuggets.
  • CIFAR10 FHE-friendly model with Brevitas: training a VGG9 FHE-compatible neural network using Brevitas, and a script to run the neural network in FHE. Execution in FHE takes ~4 minutes per image and shows an accuracy of 88.7%.
  • CIFAR10 / CIFAR100 FHE-friendly models with Transfer Learning approach: a series of three notebooks that convert a pre-trained FP32 VGG11 neural network into a quantized model using Brevitas. The model is fine-tuned on the CIFAR data-sets, converted for FHE execution with Concrete ML and evaluated using FHE simulation. For CIFAR10 and CIFAR100, respectively, our simulations show an accuracy of 90.2% and 68.2%.

If you have built awesome projects using Concrete ML, please let us know and we will be happy to showcase them here!

Tutorials

Explore more useful resources in the Awesome Zama repo.

Documentation

Full, comprehensive documentation is available here: https://docs.zama.ai/concrete-ml.

↑ Back to top

Working with Concrete ML

Citations

To cite Concrete ML in academic papers, please use the following entry:

@Misc{ConcreteML,
  title={Concrete {ML}: a Privacy-Preserving Machine Learning Library using Fully Homomorphic Encryption for Data Scientists},
  author={Zama},
  year={2022},
  note={\url{https://github.com/zama-ai/concrete-ml}},
}

Contributing

To contribute to Concrete ML, please refer to this section of the documentation.

License

This software is distributed under the BSD-3-Clause-Clear license. Read this for more details.

FAQ

Is Zama’s technology free to use?

Zama’s libraries are free to use under the BSD 3-Clause Clear license only for development, research, prototyping, and experimentation purposes. However, for any commercial use of Zama's open source code, companies must purchase Zama’s commercial patent license.

All our work is open source and we strive for full transparency about Zama's IP strategy. To know more about what this means for Zama product users, read about how we monetize our open source products in this blog post.

What do I need to do if I want to use Zama’s technology for commercial purposes?

To commercially use Zama’s technology you need to be granted Zama’s patent license. Please contact us at [email protected] for more information.

Do you file IP on your technology?

Yes, all of Zama’s technologies are patented.

Can you customize a solution for my specific use case?

We are open to collaborating and advancing the FHE space with our partners. If you have specific needs, please email us at [email protected].

↑ Back to top

Support

🌟 If you find this project helpful or interesting, please consider giving it a star on GitHub! Your support helps to grow the community and motivates further development.

↑ Back to top

concrete-ml's People

Contributors

amt42, andrei-stoian-zama, aquint-zama, bcm-at-zama, bencrts, dependabot[bot], fd0r, hugolb0, icetdrinker, jfrery, jshul, kcelia, khoaguin, oboulant, robinstraub, romanbredehoft, rudy-6-4, soonum, tguerand, thomas-quadratic, yuxizama, zaccherinij, zama-bot

concrete-ml's Issues

Installation

Is it possible to install concrete-ml through Jupyter Notebook using the pip command on a Windows machine?

Quantized Unfold

Feature request

An implementation of torch.nn.Unfold in a quantized way.

Motivation

The requested unfold operation is not implemented here with torch._C.im2col but with the fhe_conv function, in order to get lower compilation times with concrete-python afterwards. Doing an unfold with for loops / indexing is very slow compilation-wise, as it generates a huge MLIR compared to a simple convolution with kernels made of a single 1 and zeros; see the sketch below.
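
For illustration, here is a minimal, self-contained sketch of that idea in plain PyTorch (not Concrete ML code, single-channel input only): the unfold is expressed as a convolution whose kernels each contain a single 1.

# Sketch: torch.nn.Unfold expressed as a convolution with one-hot kernels.
import torch
import torch.nn.functional as F

def unfold_via_conv(x, k):
    """x: (N, 1, H, W); returns the same values as F.unfold(x, k)."""
    # One kernel per position in the k x k window, each holding a single 1.
    eye = torch.eye(k * k, dtype=x.dtype).reshape(k * k, 1, k, k)
    out = F.conv2d(x, eye)   # (N, k*k, H-k+1, W-k+1)
    return out.flatten(2)    # (N, k*k, L), matching unfold's layout

x = torch.randn(1, 1, 10, 10)
assert torch.allclose(unfold_via_conv(x, 3), F.unfold(x, 3))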

Certain ONNX Operators

Hi team, I've noticed that ONNX operators have been slowly added over previous releases this past year (kudos!).

There are two operators missing when attempting to convert a pretrained torch ResNet18 model via compile_torch_model:

  1. Expand
  2. ReduceL2

Are these two operators missing due to limitations on the FHE side or because they are not commonly used?
Thanks!

New Feature - Biclustering with CML

Hello.

Thank you for your great work!
I have a project applying FHE to gene expression data, for instance these two datasets:

  1. Yeast expression matrix, based on Tavazoie et al. 8224 rows and 17 columns, 4 bytes for each element, with -1 indicating a missing value.
  2. Human expression matrix, based on Alizadeh et al. with 4026 rows and 96 columns, 4 bytes for each element, and 999 indicating a missing value.

and to analyze the encrypted input data with biclustering algorithms like the Cheng and Church algorithm.

Biclustering can be performed with the module sklearn.cluster.bicluster; more information can be found here. There is also a wide range of Python libraries of biclustering algorithms, like biclustlib.
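
For reference, here is a plaintext (non-FHE) baseline sketch with scikit-learn. Note that in recent scikit-learn versions the biclustering estimators live directly in sklearn.cluster, and the data below is random stand-in data, not the expression matrices mentioned above.

# Plaintext biclustering baseline with scikit-learn (no FHE involved).
import numpy as np
from sklearn.cluster import SpectralCoclustering

X = np.random.default_rng(0).random((50, 20))  # stand-in for an expression matrix
model = SpectralCoclustering(n_clusters=3, random_state=0).fit(X)
print(model.row_labels_[:10])
print(model.column_labels_)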

I am very excited to contribute with the Zama community to make this unsupervised machine learning algorithm run on concrete-ml and keep gene expression data encrypted during processing. I would be happy to provide more detailed information if needed.

Many thanks
Shokofeh

encrypted_sentiment_analysis version conflict

concrete v4.0.0 conflicts with Python 3.9

ERROR: Ignored the following versions that require a different python version: 4.0.0 Requires-Python >=3.6,<3.9; 4.0.1 Requires-Python >=3.6,<3.9; 4.1.0 Requires-Python >=3.6,<3.9
ERROR: Could not find a version that satisfies the requirement concrete-compiler<0.20.0,>=0.19.0 (from concrete-numpy) (from versions: 0.1.1, 0.1.2, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 0.5.0, 0.6.0, 0.7.0, 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.14.0, 0.15.0, 0.16.0)
ERROR: No matching distribution found for concrete-compiler<0.20.0,>=0.19.0

Feature Request: support of torch.nn.Unfold function

Feature request

I would like to request support for the torch.nn.Unfold function, or a numpy equivalent.

Motivation

I am trying to do some convolutions that perform different operations than regular convolutions on each kernel, therefore I need a way of reorganising the images into each kernel.

I am doing it with numpy but it is quite inefficient:

from concrete import fhe

def conv2D_unfold_deep(image, ratio, ksize=(1, 1), pad=0, stride=1):
    """Return the indices used in the 2D convolution"""

    nfilters, height, width = image.shape
    # arr = np.array(list(range(height * width))).reshape((image.shape))

    kernel_height, kernel_width = ksize

    output_height = int((height + 2 * pad - kernel_height) / stride) + 1
    output_width = int((width + 2 * pad - kernel_width) / stride) + 1

    out = fhe.zeros((nfilters//ratio, output_height, output_width, kernel_height * kernel_width*ratio))

    for f in range(0,nfilters,ratio):
        for h in range(output_height):
            for w in range(output_width):
                h_start = h * stride
                h_end = h_start + kernel_height
                w_start = w * stride
                w_end = w_start + kernel_width
                # Get the receptive_field
                # pad image # shape (B, 8, 10, 10)
                receptive_field = image[f:f+ratio, h_start:h_end, w_start:w_end].reshape(
                    (kernel_height * kernel_width * ratio))  # shape(B, 6) binaire
                # transform input 0/1 into int between [0 ; 2**n-1]
                out[f//ratio, h, w, :] = receptive_field  # output_var_unfold

    return out.astype(int)

which is very inefficient because of the three for loops.
I could do it with indexing but it is not supported by concrete:

import numpy as np
from concrete import fhe

coordinates = np.arange(0,100).reshape((10,10))
indexes = conv2D_unfold_deep(coordinates, ratio=1)
x = np.random.randint(0,2,(100))

@fhe.compiler({"x": "encrypted"})
def f(x):
    x = x[indexes]
    return x

f(x)

returns:

ValueError: Tracer<output=EncryptedTensor<uint1, shape=(100,)>> cannot be indexed with [[[[ 0]    [ 1]    [ 2]    [ 3]    [ 4]    [ 5]    [ 6]    [ 7]    [ 8]    [ 9]]    [[10]    [11]    [12]    [13]    [14]    [15]    [16]    [17]    [18]    [19]]    [[20]    [21]    [22]    [23]    [24]    [25]    [26]    [27]    [28]    [29]]    [[30]    [31]    [32]    [33]    [34]    [35]    [36]    [37]    [38]    [39]]    [[40]    [41]    [42]    [43]    [44]    [45]    [46]    [47]    [48]    [49]]    [[50]    [51]    [52]    [53]    [54]    [55]    [56]    [57]    [58]    [59]]    [[60]    [61]    [62]    [63]    [64]    [65]    [66]    [67]    [68]    [69]]    [[70]    [71]    [72]    [73]    [74]    [75]    [76]    [77]    [78]    [79]]    [[80]    [81]    [82]    [83]    [84]    [85]    [86]    [87]    [88]    [89]]    [[90]    [91]    [92]    [93]    [94]    [95]    [96]    [97]    [98]    [99]]]]
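
A generic workaround sketch for constant-index gathering, shown in plain NumPy only to illustrate the equivalence (whether the corresponding clear-constant matmul is efficient or advisable inside a concrete circuit is an assumption, not something verified here): a lookup x[indexes] with a constant index array can be rewritten as a product with a one-hot matrix.

# Emulating x[indexes] (constant indexes) with a one-hot matrix product.
import numpy as np

indexes = np.arange(100).reshape(10, 10)                 # constant index array
x = np.random.randint(0, 2, size=100)                    # stand-in for the encrypted vector

one_hot = np.eye(100, dtype=np.int64)[indexes.ravel()]   # clear constant, shape (100, 100)
gathered = (one_hot @ x).reshape(indexes.shape)          # same values as x[indexes]

assert np.array_equal(gathered, x[indexes])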

How to quantize a BatchNorm2d layer?

When I compile a model with BatchNorm2d layers, the following error occurs:

The following tensors were expected to be quantized, but the values found during calibration do not appear to be quantized.
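
One common, framework-agnostic preparation step before quantizing a network is to fold each BatchNorm2d into the preceding convolution, so that only convolution weights and biases remain to be quantized. This is a general technique, not Concrete ML's documented answer to this particular error; a minimal sketch:

# Fold a BatchNorm2d (in eval mode, using its running statistics) into a Conv2d.
import torch

def fold_bn_into_conv(conv: torch.nn.Conv2d, bn: torch.nn.BatchNorm2d):
    """Return folded (weight, bias) equivalent to bn(conv(x)) at inference time."""
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    weight = conv.weight * scale.reshape(-1, 1, 1, 1)
    bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    bias = (bias - bn.running_mean) * scale + bn.bias
    return weight, bias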

Segfault on v0.6.0/docker/Apple M1/M2

Summary

There is a crash issue (in Docker) which is currently being investigated in one of our main dependencies: Concrete-Numpy. This issue happens on Apple Silicon (M1/M2) with Docker. We're working on it.

Description

  • versions affected: Concrete-Numpy >= 0.9.0, i.e. Concrete-ML >= 0.6.0
  • config (optional: HW, OS): macOS, M1/M2 chips
  • workaround (optional): use an older version for now (i.e. Concrete-ML 0.5.1)

ImportError: cannot import name 'enc_split' from 'utils' in LLM case.

Summary

When I attempt to run the LLM example, the error raised is ImportError: cannot import name 'enc_split' from 'utils'.
The error appears in the file quant_framework.py, at the line from utils import enc_split, max_fhe_relu, simple_slice.

In detail, the error is:

ImportError Traceback (most recent call last)
Cell In[1], line 5
3 import numpy as np
4 from concrete.fhe.tracing import Tracer
----> 5 from quant_framework import DualArray, Quantizer
7 from concrete import fhe

File /test_for_docker_jupyter/quant_framework.py:7
5 import numpy as np
6 from concrete.fhe.tracing import Tracer
----> 7 from utils import enc_split, max_fhe_relu, simple_slice
9 EPSILON = 2**-11
11 import numpy as np

ImportError: cannot import name 'enc_split' from 'utils' (/opt/conda/envs/py39/lib/python3.9/site-packages/utils/__init__.py)

Description

  • versions affected: concrete-ml 1.1
  • python version: 3.9
  • config (optional: HW, OS): Docker, Linux container

Unable to run example on MacOS

Summary

I'm unable to compile a model in my MacOS environment.

Description

Steps to reproduce:

  • Install using
python -m venv venv
cd venv && source venv/bin/activate
pip install -U pip wheel setuptools
pip install concrete-ml
  • Copy/paste the example from here.
  • Use python example.py and get this error:
Traceback (most recent call last):
  File "example.py", line 21, in <module>
    model.compile(X_train)
  File "/Users/rmagner/Documents/Experiments/Concrete/venv/lib/python3.8/site-packages/concrete/ml/sklearn/base.py", line 1101, in compile
    circuit = self.quantized_module_.compile(
  File "/Users/rmagner/Documents/Experiments/Concrete/venv/lib/python3.8/site-packages/concrete/ml/quantization/quantized_module.py", line 358, in compile
    self.forward_fhe = compiler.compile(
  File "/Users/rmagner/Documents/Experiments/Concrete/venv/lib/python3.8/site-packages/concrete/numpy/compilation/compiler.py", line 373, in compile
    mlir = GraphConverter.convert(self.graph, virtual=self.configuration.virtual)
  File "/Users/rmagner/Documents/Experiments/Concrete/venv/lib/python3.8/site-packages/concrete/numpy/mlir/graph_converter.py", line 401, in convert
    GraphConverter._update_bit_widths(graph)
  File "/Users/rmagner/Documents/Experiments/Concrete/venv/lib/python3.8/site-packages/concrete/numpy/mlir/graph_converter.py", line 226, in _update_bit_widths
    raise RuntimeError(
RuntimeError: Function you are trying to compile cannot be converted to MLIR:

%0 = [[ -3] [-1 ... -1] [  4]]        # ClearTensor<int8, shape=(30, 1)>
%1 = _input_0                         # EncryptedTensor<uint8, shape=(1, 30)>
%2 = -131                             # ClearScalar<int9>
%3 = add(%1, %2)                      # EncryptedTensor<int9, shape=(1, 30)>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only up to 8-bit integers are supported
%4 = subgraph(%3)                     # EncryptedTensor<int8, shape=(1, 30)>
%5 = matmul(%4, %0)                   # EncryptedTensor<int16, shape=(1, 1)>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only up to 8-bit integers are supported
%6 = subgraph(%5)                     # EncryptedTensor<uint8, shape=(1, 1)>
return %6

Subgraphs:

    %4 = subgraph(%3):

         %0 = -128                           # ClearScalar<int8>
         %1 = 127                            # ClearScalar<uint7>
         %2 = 3                              # ClearScalar<uint2>
         %3 = 0.02919340787200249            # ClearScalar<float64>
         %4 = 0.02919340787200249            # ClearScalar<float64>
         %5 = input                          # EncryptedTensor<uint2, shape=(1, 30)>
         %6 = multiply(%4, %5)               # EncryptedTensor<float64, shape=(1, 30)>
         %7 = true_divide(%6, %3)            # EncryptedTensor<float64, shape=(1, 30)>
         %8 = add(%7, %2)                    # EncryptedTensor<float64, shape=(1, 30)>
         %9 = rint(%8)                       # EncryptedTensor<float64, shape=(1, 30)>
        %10 = clip(%9, %0, %1)               # EncryptedTensor<float64, shape=(1, 30)>
        %11 = astype(%10, dtype=int_)        # EncryptedTensor<uint1, shape=(1, 30)>
        return %11

    %6 = subgraph(%5):

         %0 = 0                                 # ClearScalar<uint1>
         %1 = 255                               # ClearScalar<uint8>
         %2 = 0                                 # ClearScalar<uint1>
         %3 = 0.003919792729613519              # ClearScalar<float64>
         %4 = 1.0                               # ClearScalar<float64>
         %5 = 1.0                               # ClearScalar<float64>
         %6 = [0.11045581]                      # ClearTensor<float32, shape=(1,)>
         %7 = 0.0003101464749011019             # ClearScalar<float64>
         %8 = [[30]]                            # ClearTensor<uint5, shape=(1, 1)>
         %9 = 0                                 # ClearScalar<uint1>
        %10 = input                             # EncryptedTensor<uint5, shape=(1, 1)>
        %11 = astype(%10, dtype=float32)        # EncryptedTensor<float32, shape=(1, 1)>
        %12 = add(%11, %9)                      # EncryptedTensor<float32, shape=(1, 1)>
        %13 = add(%12, %8)                      # EncryptedTensor<float64, shape=(1, 1)>
        %14 = multiply(%7, %13)                 # EncryptedTensor<float64, shape=(1, 1)>
        %15 = add(%14, %6)                      # EncryptedTensor<float64, shape=(1, 1)>
        %16 = negative(%15)                     # EncryptedTensor<float64, shape=(1, 1)>
        %17 = exp(%16)                          # EncryptedTensor<float64, shape=(1, 1)>
        %18 = add(%5, %17)                      # EncryptedTensor<float64, shape=(1, 1)>
        %19 = true_divide(%4, %18)              # EncryptedTensor<float64, shape=(1, 1)>
        %20 = true_divide(%19, %3)              # EncryptedTensor<float64, shape=(1, 1)>
        %21 = add(%20, %2)                      # EncryptedTensor<float64, shape=(1, 1)>
        %22 = rint(%21)                         # EncryptedTensor<float64, shape=(1, 1)>
        %23 = clip(%22, %0, %1)                 # EncryptedTensor<float64, shape=(1, 1)>
        %24 = astype(%23, dtype=int_)           # EncryptedTensor<uint1, shape=(1, 1)>
        return %24

Note I've also tried running in JupyterLab and get the same error when hitting the model.compile step. Everything else up to that works as expected.


The documentation is a little conflicting. This page says this method should work just fine on MacOS, but this page says glibc is required, which isn't supported on MacOS as far as I'm aware. So is anything further required to get this running on MacOS, or is something else misconfigured?

Here are some environment stats:

$ ~ gcc --version
Apple clang version 14.0.0 (clang-1400.0.29.202)
Target: x86_64-apple-darwin21.6.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin

AttributeError: 'float' object has no attribute 'astype'


Summary

I have created an image classification model in PyTorch and trained it on unencrypted data. After that, the model was exported using ONNX, imported back and checked.
Now when the model is compiled with "compile_onnx_model", it gives the following error.

Description

  • versions affected:
  • python version: 3.9.13
  • config (optional: HW, OS):
  • workaround (optional): if you’ve a way to workaround the issue
  • proposed fix (optional): if you’ve a way to fix the issue

Step by step procedure someone should follow to trigger the bug:

minimal POC to trigger the bug

print("Minimal POC to reproduce the bug")

FHE models training

Hi,

I am doing research on FHE with neural networks, and I am wondering whether you have tried or plan on doing model training on homomorphically encrypted data. If the library has such capabilities, could you point me to the tutorial or give me some idea of how one would go about that?

Thank you and great work on the library so far!

Kernel crashes when I run the following code

Summary

I ran the following code in a Jupyter notebook, related to the linear regression tutorial with concrete-ml, and the kernel crashes each time I run this code:
fhe_circuit.client.keygen(force=False)

Description

  • versions affected: 0.6.1
  • python version: 3.9.16
  • OS: Linux Mint Una

Step by step procedure someone should follow to trigger the bug:

Just run all the cells in the linear regression Jupyter notebook with concrete-ml.

Unknown attributes: count_include_pad when compiling with compile_torch_model

Summary

When developing a custom neural network and trying to compile it on the MNIST dataset through compile_torch_model, I get an assertion error:
"AssertionError: Got the following unknown attributes: count_include_pad. Accepted attributes: ceil_mode, kernel_shape, pads, strides"

I was expecting compilation and generation of the circuit which I will then use to train my model.

  • versions affected:
  • python version: 3.10.12
  • config : x86_64
  • workaround (optional): remove the pooling layers, but you are likely to run into other issues and, above all, affect the accuracy of the network.

Step by step procedure someone should follow to trigger the bug:

Enhance the documentation for encrypted training

Feature request

Disclaimer: I am very new to concrete-ml and FHE in general.

I am referring to the existing example for encrypted training. From my understanding, the example works entirely on plaintext data, or at least I would expect the data to be encrypted before it is fed into model.fit. In case I misunderstand the example, I would be happy if you could clarify it for me. If my understanding is correct, could you enhance the example to showcase the full workflow?

Motivation

I am very new to Concrete ML, and it is hard to understand when data is encrypted in the provided example for encrypted training.

Docker image is not working on Apple M1/M2 chips

Summary

The version provided through the Docker image is not working on Apple M1 and M2 chips.

Description

minimal POC to trigger the bug

On a M1 or M2 machine:

# Install
docker pull zamafhe/concrete-ml:latest

# Run with:
docker run --rm -it -p 8888:8888 -v ~/host_folder:/data zamafhe/concrete-ml

Python code (example from the README):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from concrete.ml.sklearn import LogisticRegression


# Let's create a synthetic data-set
x, y = make_classification(n_samples=100, class_sep=2, n_features=30, random_state=42)

# Split the data-set into a train and test set
X_train, X_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, random_state=42
)
# Now we train in the clear and quantize the weights
model = LogisticRegression(n_bits=8)
model.fit(X_train, y_train)

# We then compile on a representative set
model.compile(X_train) # --> This line kills the kernel and restarts the notebook

Can't emit artifacts

Summary

What happened/what you expected to happen?
I wanted to check if the model gets compiled on my computer via the concrete-ml library, but I faced the following issue.

RuntimeError: Can't emit artifacts: Command failed:ld --shared -o /tmp/tmpgrs3bzdc/sharedlib.so /tmp/tmpgrs3bzdc.module-0.mlir.o /home/mahmoud/miniconda3/envs/concrete/lib/python3.10/site-packages/concrete_python.libs/libConcretelangRuntime-636b2a26.so -rpath=/home/mahmoud/miniconda3/envs/concrete/lib/python3.10/site-packages/concrete_python.libs --disable-new-dtags 2>&1
Code:32512
sh: 1: ld: not found

Description

  • versions affected: 1.3
  • python version: 3.10.13
  • config (optional: HW, OS): Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz, Ubuntu 22.04
  • proposed fix (optional): I assume it may be because of my CPU model; it is a 4th-generation Intel CPU.

Step by step procedure someone should follow to trigger the bug:

minimal POC to trigger the bug

from sklearn.model_selection import train_test_split
from sklearn.datasets import make_regression
from concrete.ml.sklearn import LinearRegression

X, y = make_regression(
    n_samples=200, n_features=8, n_targets=1, bias=5.0, noise=30.0, random_state=42
)
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)

concrete_model = LinearRegression()
concrete_model.fit(x_train, y_train)
circuit = concrete_model.compile(x_train) # I get the error for this line of code
circuit.keygen()
y_pred = concrete_model.predict(x_test, execute_in_fhe=True)

Model dumping not working for Fully Connected Neural Network on MNIST Example

I am trying to run the fully connected neural network on MNIST example found at https://github.com/zama-ai/concrete-ml/blob/release/1.1.x/docs/advanced_examples/FullyConnectedNeuralNetworkOnMNIST.ipynb

I am not using the Jupyter Notebook but running the code locally by writing it in a .py file. The code is working fine and I am getting the expected result.

However, when I am trying to dump the model, either to a string using model.dumps() or a file using model.dump(f), I am getting the following error:

File "/home/user/.local/lib/python3.8/site-packages/concrete/ml/sklearn/qnn.py", line 566, in dump_dict
raise NotImplementedError(
NotImplementedError: Serializing a custom Callable object is not secure and is therefore disabled. Additionally, the serialization of skorch's different callback classes is not supported. Please set callbacks to 'disable'. Got None.

I also used these methods in the Logistic Regression example, found on the readme page, and they are working correctly as I am able to dump the model both into a string and a file.

As per the Serialization page, found at https://docs.zama.ai/concrete-ml/advanced-topics/serialization, this method should work for all built-in models so this should not be a problem caused by the selected model.

I am unable to understand why this is happening.
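
Based purely on the NotImplementedError text quoted above, one hypothetical workaround (an untested assumption, not a documented fix) would be to construct the network with the skorch-style callbacks parameter set to 'disable' before fitting and dumping it:

# Hypothetical sketch derived from the error message: disable skorch callbacks
# so the built-in neural network can be serialized. Not verified on this version.
from concrete.ml.sklearn import NeuralNetClassifier

model = NeuralNetClassifier(
    module__n_layers=3,   # plus your usual hyper-parameters from the MNIST example
    callbacks="disable",  # per the error: "Please set callbacks to 'disable'"
)
# model.fit(X_train, y_train)
# serialized = model.dumps()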

Feature request: support GlobalAveragePooling

Summary

I have created an image classification model in PyTorch and trained it on unencrypted data. After that, the model was exported using ONNX, imported back and checked.
Now when the model is compiled with "compile_onnx_model", it gives the following error:

ValueError: The following ONNX operators are required to convert the pytorch model to numpy but are not currently implemented: GlobalAveragePool

Description

  • versions affected: v1.3.0
  • python version: 3.10.6
  • config (optional: HW, OS): Linux

ONNX Gather operator missing

I'm following the decision tree tutorial https://docs.zama.ai/concrete-ml/stable/user/advanced_examples/DecisionTreeClassifier.html but I'm changing the dataset to another one, e.g., https://www.openml.org/search?type=data&status=active&tags.tag=uci&qualities.NumberOfClasses=%3D_2&id=1480

But I'm getting the following error

ValueError: The following ONNX operators are required to convert the torch model to numpy but are not currently implemented: Gather.
Available ONNX operators: Abs, Acos, Acosh, Add, Asin, Asinh, Atan, Atanh, Celu, Clip, Constant, Conv, Cos, Cosh, Div, Elu, Equal, Erf, Exp, Gemm, Greater, HardSigmoid, Identity, LeakyRelu, Less, Log, MatMul, Mul, Not, Relu, Reshape, Selu, Sigmoid, Sin, Sinh, Softplus, Sub, Tan, Tanh, ThresholdedRelu

when running .fit or .fit_benchmark on the model.

I imagine this operator https://github.com/onnx/onnx/blob/main/docs/Changelog.md#gather-1 is not yet implemented.

Cannot install concrete-ml==1.0.0, ...

Goal: install concrete-ml

Context
MacOS 10.14
Intel
python --version 3.8.2

Steps:
inside a virtual environment handled by conda

Just run pip install concrete-ml

Bug:

ERROR: Cannot install concrete-ml==1.0.0, concrete-ml==1.0.1, concrete-ml==1.0.2 and concrete-ml==1.0.3 because these package versions have conflicting dependencies.

The conflict is caused by:
    concrete-ml 1.0.3 depends on concrete-python==1.0.0
    concrete-ml 1.0.2 depends on concrete-python==1.0.0
    concrete-ml 1.0.1 depends on concrete-python==1.0.0
    concrete-ml 1.0.0 depends on concrete-python==1.0.0

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

Then, if you try pip install concrete-python or pip install concrete-python==1.0.0:

ERROR: Could not find a version that satisfies the requirement concrete-python (from versions: none)
ERROR: No matching distribution found for concrete-python

or

ERROR: Could not find a version that satisfies the requirement concrete-python==1.0.0 (from versions: none)
ERROR: No matching distribution found for concrete-python==1.0.0

Then, if you try another approach, pip install concrete-ml==1.0.3:

Collecting concrete-ml==1.0.3
  Using cached concrete_ml-1.0.3-py3-none-any.whl (178 kB)
Collecting boto3<2.0.0,>=1.23.5 (from concrete-ml==1.0.3)
  Using cached boto3-1.26.145-py3-none-any.whl (135 kB)
Requirement already satisfied: brevitas==0.8.0 in /Users/mike/anaconda3/envs/fhe_chess/lib/python3.8/site-packages (from concrete-ml==1.0.3) (0.8.0)
INFO: pip is looking at multiple versions of concrete-ml to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement concrete-python==1.0.0 (from concrete-ml) (from versions: none)
ERROR: No matching distribution found for concrete-python==1.0.0

concrete-ml won't import

Hello,
I have installed concrete-ml using pip and followed the steps to install pygraphviz (pygraphviz is working).

ERROR: Could not find a version that satisfies the requirement concrete-compiler<0.5.0,>=0.4.0 (from concrete-numpy[full]<0.4.0,>=0.3.0->concrete-ml) (from versions: none)
ERROR: No matching distribution found for concrete-compiler<0.5.0,>=0.4.0 (from concrete-numpy[full]<0.4.0,>=0.3.0->concrete-ml)

I am using:
ubuntu 20.04.4
python 3.8.10

No matching concrete-python==1.0.0

Summary

Impossible to install latest version with pip

Description

  • versions affected:
  • python version: 3.8 3.9 3.10
  • config (optional: HW, OS): Mac OS M1 / Mac OS intel

Step by step procedure someone should follow to trigger the bug:

minimal POC to trigger the bug

pip install concrete-ml==1.0.2

ERROR: Could not find a version that satisfies the requirement concrete-python==1.0.0 (from concrete-ml) (from versions: none)
ERROR: No matching distribution found for concrete-python==1.0.0

![Screenshot 2023-05-21 at 22:09:58](https://github.com/zama-ai/concrete-ml/assets/26391421/5f5de073-f2ff-4b0f-a2e3-405353af7e07)

May I ask how to visualize the graph?

I ran this code: open("cifar10.graph", "w").write(str(quantized_numpy_module.fhe_circuit)). After executing it, I obtained the file cifar10.graph. How can I visualize this file?

Accuracy mismatch between clear and FHE predictions in CIFAR-10 example

I am trying to run the VGG like NN on CIFAR-10 example found at https://github.com/zama-ai/concrete-ml/tree/main/use_case_examples/cifar/cifar_brevitas_training.

When I run the example once, I get correct results, in the sense that the accuracy (during validation) is similar for clear and FHE predictions. However, when I run the same code multiple times with the same parameters, I get a huge difference between the accuracy of clear and FHE predictions. For example, in clear prediction I get an accuracy of around 97%, which remains similar across multiple iterations. On the other hand, the accuracy in FHE prediction drops to around 40% to 50% and varies in this range across iterations. I am executing the scripts as obtained from the repository, only changing the number of training epochs to 10 from the default 1000.

I am unable to understand why this is happening. While it is understandable that multiple training runs will result in models with varied accuracy, such a significant drop in accuracy, and only in FHE prediction, does not make sense to me. Please let me know if any other code base for CIFAR-10 is available.

the security level

Hi, I saw a parameter (n_bits=3) when defining a model. Does it mean the security level? Or when I use it for encrypted inference, does it satisfy 128-bit security?

Client-Server FHE with a non-concrete_ml torch model

Hi,
I'm trying to do encrypted inference with my own pre-trained torch model. I am using your client-server example as reference: https://github.com/zama-ai/concrete-ml/blob/release/0.6.x/docs/advanced_examples/ClientServer.ipynb

Description

The steps that I have done are the following: 

  1. I have compiled my torch model using compile_torch_model 
  2. Now when I use client_send_input_to_server_for_prediction(encrypted_input) it raises an error when it reaches this line inside the function:
    encrypted_prediction = FHEModelServer(self.server_dir.name).run( encrypted_input, serialized_evaluation_keys )
    Error output:
    RuntimeError: Number of dimensions did not match the number of expected dimensions

Could you give me some idea about how I can resolve it?
Thank you so much, this library is helping me a lot!

Feature Request: Add support for ScatterElements operator during compilation

Summary

I have implemented a GNN model, quantised using Brevitas, and compiled using 'compile_brevitas_qat_model'.
During the compilation, I face the following error: "ValueError: The following ONNX operators are required to convert the torch model to numpy but are not currently implemented: ScatterElements."

Would it be possible to add ScatterElements as a supported operator, or are there any suggested workarounds? Thank you!

Not installing with pip (python 3.10 not supported workaround?)

NOTE: This is not actually a bug.

Summary

What happened/what you expected to happen?

Not able to install concrete-ml with pip. Concrete-ml 0.2.1 on PyPI requires Python >=3.8, <3.10.

EDIT: Sorry, I printed the pip list from a different Python version (3.9).

I do not have concrete-numpy installed.

Description

  • versions affected: 0.1.0; 0.1.1; 0.2.0; 0.2.1.
  • python version: Python 3.10.4
  • pip version: 22.2.2
  • config (optional: HW, OS): Ubuntu 20.04.1 LTS
  • workaround (optional): if you’ve a way to workaround the issue
  • proposed fix (optional): if you’ve a way to fix the issue

Step by step procedure someone should follow to trigger the bug:

$ pip install concrete-ml
ERROR: Ignored the following versions that require a different python version: 0.1.0 Requires-Python >=3.8,<3.10; 0.1.1 Requires-Python >=3.8,<3.10; 0.2.0 Requires-Python >=3.8,<3.10; 0.2.1 Requires-Python >=3.8,<3.10
ERROR: Could not find a version that satisfies the requirement concrete-ml (from versions: none)
ERROR: No matching distribution found for concrete-ml

predict() got an unexpected keyword argument 'fhe'

I am trying to run the logistic regression example provided in "A simple Concrete ML example with scikit-learn" on the concrete ml GitHub page.

However, while running the code, I am getting the following error:

predict() got an unexpected keyword argument 'fhe'

while executing the following statement:

y_pred_fhe = model.predict(X_test, fhe="execute")

I don't know why this is happening. I am running the code on Ubuntu 20.04 LTS. I have installed concrete ml through the "pip" method in Python 3.8.10

I am also receiving the following warning:

PkgResourcesDeprecationWarning: 0.23ubuntu1 is an invalid version and will not be supported in a future release
warnings.warn(

Failed to compile concrete-ml models on Ubuntu

Summary

Cannot compile concrete-ml AI models for example code

Description

  • concrete-ml: 1.4.0
  • python version: 3.8
  • OS: Ubuntu 20.04 (virtual environment)
    or
  • concrete-ml: 1.4.0
  • python version: 3.10.12
  • OS: Ubuntu 22.04 (virtual environment)

On either environment above, the step-by-step procedure to trigger the bug:

minimal POC to trigger the bug

pip3 install concrete-ml

Run the code below:

import concrete.ml
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from concrete.ml.sklearn import LogisticRegression
x, y = make_classification(n_samples=100, class_sep=2, n_features=30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
model = LogisticRegression(n_bits=8)
model.fit(X_train, y_train)
model.compile(X_train)

The result is shown below after compiling the model:

 #0 0x00007f13fccbb761 llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) (.localalias) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x13a5761)
 #1 0x00007f13fccb9174 SignalHandler(int) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x13a3174)
 #2 0x00007f14a6ba8090 (/lib/x86_64-linux-gnu/libc.so.6+0x43090)
 #3 0x00007f1401063128 cxx::unwind::prevent_unwind::h027936808a60dbca (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x574d128)
 #4 0x00007f140105a7ba concrete_optimizer::dag::empty() (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x57447ba)
 #5 0x00007f140037f097 mlir::concretelang::optimizer::DagPass::runOnOperation() (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x4a69097)
 #6 0x00007f13fcbfe342 mlir::detail::OpToOpPassAdaptor::run(mlir::Pass*, mlir::Operation*, mlir::AnalysisManager, bool, unsigned int) (.localalias) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x12e8342)
 #7 0x00007f13fcbfe919 mlir::detail::OpToOpPassAdaptor::runPipeline(mlir::OpPassManager&, mlir::Operation*, mlir::AnalysisManager, bool, unsigned int, mlir::PassInstrumentor*, mlir::PassInstrumentation::PipelineParentInfo const*) (.localalias) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x12e8919)
 #8 0x00007f13fcbff923 mlir::detail::OpToOpPassAdaptor::runOnOperationImpl(bool) (.localalias) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x12e9923)
 #9 0x00007f13fcbfe036 mlir::detail::OpToOpPassAdaptor::run(mlir::Pass*, mlir::Operation*, mlir::AnalysisManager, bool, unsigned int) (.localalias) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x12e8036)
#10 0x00007f13fcbfe919 mlir::detail::OpToOpPassAdaptor::runPipeline(mlir::OpPassManager&, mlir::Operation*, mlir::AnalysisManager, bool, unsigned int, mlir::PassInstrumentor*, mlir::PassInstrumentation::PipelineParentInfo const*) (.localalias) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x12e8919)
#11 0x00007f13fcbff421 mlir::PassManager::run(mlir::Operation*) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x12e9421)
#12 0x00007f13fe8b1313 mlir::concretelang::pipeline::getFHEContextFromFHE[abi:cxx11](mlir::MLIRContext&, mlir::ModuleOp&, mlir::concretelang::optimizer::Config, std::function<bool (mlir::Pass*)>) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x2f9b313)
#13 0x00007f13fe88e58b mlir::concretelang::CompilerEngine::getConcreteOptimizerDescription(mlir::concretelang::CompilerEngine::CompilationResult&) (.localalias) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x2f7858b)
#14 0x00007f13fe8935bf mlir::concretelang::CompilerEngine::determineFHEParameters(mlir::concretelang::CompilerEngine::CompilationResult&) (.localalias) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x2f7d5bf)
#15 0x00007f13fe894653 mlir::concretelang::CompilerEngine::compile(mlir::ModuleOp, mlir::concretelang::CompilerEngine::Target, std::optional<std::shared_ptr<mlir::concretelang::CompilerEngine::Library> >) (.localalias) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x2f7e653)
#16 0x00007f13fe898c28 llvm::Expected<mlir::concretelang::CompilerEngine::Library> mlir::concretelang::compileModuleOrSource<mlir::ModuleOp>(mlir::concretelang::CompilerEngine*, mlir::ModuleOp, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, bool, bool, bool) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x2f82c28)
#17 0x00007f13fe8991c4 mlir::concretelang::CompilerEngine::compile(mlir::ModuleOp, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, bool, bool, bool) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x2f831c4)
#18 0x00007f13fca990fd mlir::concretelang::LibrarySupport::compile(mlir::ModuleOp&, std::shared_ptr<mlir::concretelang::CompilationContext>&, mlir::concretelang::CompilationOptions) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x11830fd)
#19 0x00007f13fcaa13f9 library_compile_module(LibrarySupport_Py, mlir::ModuleOp, mlir::concretelang::CompilationOptions, std::shared_ptr<mlir::concretelang::CompilationContext>) (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/libConcretelangBindingsPythonCAPI.so+0x118b3f9)
#20 0x00007f13f91b3d35 (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/_concretelang.cpython-38-x86_64-linux-gnu.so+0x70d35)
#21 0x00007f13f918a1db (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/_concretelang.cpython-38-x86_64-linux-gnu.so+0x471db)
#22 0x00007f13f91689c7 (/home/weimin/.local/lib/python3.8/site-packages/mlir/_mlir_libs/_concretelang.cpython-38-x86_64-linux-gnu.so+0x259c7)
#23 0x00000000005d5499 PyCFunction_Call (/usr/bin/python3.8+0x5d5499)
#24 0x00000000005d6066 _PyObject_MakeTpCall (/usr/bin/python3.8+0x5d6066)
#25 0x00000000004e22b3 (/usr/bin/python3.8+0x4e22b3)
#26 0x000000000054c8a9 _PyEval_EvalFrameDefault (/usr/bin/python3.8+0x54c8a9)
#27 0x000000000054552a _PyEval_EvalCodeWithName (/usr/bin/python3.8+0x54552a)
#28 0x00000000005d5a23 _PyFunction_Vectorcall (/usr/bin/python3.8+0x5d5a23)
#29 0x0000000000547447 _PyEval_EvalFrameDefault (/usr/bin/python3.8+0x547447)
#30 0x000000000054552a _PyEval_EvalCodeWithName (/usr/bin/python3.8+0x54552a)
#31 0x00000000005d5a23 _PyFunction_Vectorcall (/usr/bin/python3.8+0x5d5a23)
#32 0x00000000005483b6 _PyEval_EvalFrameDefault (/usr/bin/python3.8+0x5483b6)
#33 0x00000000005d5846 _PyFunction_Vectorcall (/usr/bin/python3.8+0x5d5846)
#34 0x0000000000547447 _PyEval_EvalFrameDefault (/usr/bin/python3.8+0x547447)
#35 0x000000000054552a _PyEval_EvalCodeWithName (/usr/bin/python3.8+0x54552a)
#36 0x00000000005d5a23 _PyFunction_Vectorcall (/usr/bin/python3.8+0x5d5a23)
#37 0x0000000000579c7d (/usr/bin/python3.8+0x579c7d)
#38 0x00000000005d5fcf _PyObject_MakeTpCall (/usr/bin/python3.8+0x5d5fcf)
#39 0x000000000054ca58 _PyEval_EvalFrameDefault (/usr/bin/python3.8+0x54ca58)
#40 0x000000000054552a _PyEval_EvalCodeWithName (/usr/bin/python3.8+0x54552a)
#41 0x00000000004e1bd0 (/usr/bin/python3.8+0x4e1bd0)
#42 0x00000000005483b6 _PyEval_EvalFrameDefault (/usr/bin/python3.8+0x5483b6)
#43 0x000000000054552a _PyEval_EvalCodeWithName (/usr/bin/python3.8+0x54552a)
#44 0x00000000004e1bd0 (/usr/bin/python3.8+0x4e1bd0)
#45 0x000000000054c8a9 _PyEval_EvalFrameDefault (/usr/bin/python3.8+0x54c8a9)
#46 0x000000000054552a _PyEval_EvalCodeWithName (/usr/bin/python3.8+0x54552a)
#47 0x0000000000684327 PyEval_EvalCode (/usr/bin/python3.8+0x684327)
#48 0x0000000000673a41 (/usr/bin/python3.8+0x673a41)
#49 0x0000000000673abb (/usr/bin/python3.8+0x673abb)
#50 0x0000000000488acc (/usr/bin/python3.8+0x488acc)
#51 0x000000000048947a PyRun_InteractiveLoopFlags (/usr/bin/python3.8+0x48947a)
#52 0x0000000000674a19 PyRun_AnyFileExFlags (/usr/bin/python3.8+0x674a19)
#53 0x00000000004c41f6 (/usr/bin/python3.8+0x4c41f6)
#54 0x00000000006b43fd Py_BytesMain (/usr/bin/python3.8+0x6b43fd)
#55 0x00007f14a6b89083 __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24083)
#56 0x00000000005da67e _start (/usr/bin/python3.8+0x5da67e)

concrete-ml installation issue

I am unable to resolve the following error while installing:
ERROR: Could not find a version that satisfies the requirement concrete-compiler<0.5.0,>=0.4.0 (from concrete-numpy[full]) (from versions: none)
ERROR: No matching distribution found for concrete-compiler<0.5.0,>=0.4.0

=============
I followed these 2 steps:

  1. docker pull zamafhe/concrete-ml:latest
  2. pip install concrete-ml>s1.txt

s1.txt has been attached.
