fastinference's Introduction

fastinference

A collection of inference modules for fastai, including inference speedup and interpretability

Install

pip install fastinference

Optional submodules are available as well:

  • pip install fastinference[interp] - Interpretability modules such as SHAP and Feature Importance
  • pip install fastinference[onnx-cpu] - ONNX for a CPU environment
  • pip install fastinference[onnx-gpu] - ONNX for a GPU environment

Wonderful Contributors:

(Listed with both their fastai and GitHub handles where possible):

  • Pavel (Pak)

fastinference's People

Contributors

apollo-xi, dependabot[bot], hal-314, meanderingstream, muellerzr, ncduy0303, polyrand, rsomani95

fastinference's Issues

CUDA Out of Memory When Running Inference on a Large Test Set (And Proposed Fixes)

In inference.py's get_preds function, the inputs are stored regardless of whether fully_decoded is True or False. These inputs are kept on the GPU and are only released from memory after the inference loop finishes.

As the size of the test set increases, the GPU will eventually run out of memory (in my test case, I ran out of memory with <50,000 items with an EfficientNet-B3A and an image size of 224x224).

There are two ways to fix this:

1. Store the inputs on the CPU (I haven't tested this).

with torch.no_grad():
    if is_multi:
        for i in range(x.dls.n_inp):
            #inps[i].append(batch[i])
            inps[i].append(batch[i].cpu())
    else:
        #inps.append(batch[:x.dls.n_inp])
        inps.append(batch[:x.dls.n_inp].cpu())
    # rest of the loop

2. Skip storing the inputs altogether if fully_decoded is False, as it's redundant. This should provide some speedup too. (Tested)

with torch.no_grad():
    if fully_decoded:
        if is_multi:
            for i in range(x.dls.n_inp):
                inps[i].append(batch[i])
        else:
            inps.append(batch[:x.dls.n_inp])
    # rest of the loop

I've made this modification in my code and can vouch that it works: I can pass in arbitrarily long test sets and there are no issues.


I think (2) should be done regardless. I'm not sure about (1), as converting to .cpu() adds overhead, but perhaps we should leave this to the user (if they want to prioritise inference speed, they'd probably skip this anyway); a combined sketch follows.
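For reference, here is how both fixes could combine, with a hypothetical store_cpu flag so that fix (1) stays opt-in (a sketch only, not tested against the repo):

with torch.no_grad():
    if fully_decoded:  # fix (2): only store inputs when they will actually be used
        if is_multi:
            for i in range(x.dls.n_inp):
                # fix (1), opt-in: move stored inputs off the GPU
                inps[i].append(batch[i].cpu() if store_cpu else batch[i])
        else:
            inps.append(batch[:x.dls.n_inp].cpu() if store_cpu else batch[:x.dls.n_inp])
    # rest of the loop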

Dendrogram correlates incorrectly continuous and categorical variables

Hi

I noticed that the current dendrogram implementation doesn't differentiate between categorical and continuous variables: it computes correlation as if everything were categorical, using the Cramér V statistic. However, this statistic is designed to be used with categorical variables (Wikipedia).

For continuous variables, Spearman or Kendall correlation should be used. I would recommend using Kendall, as Spearman supposes that the relationship between them is always positive or negative (from Wikipedia: "It assesses how well the relationship between two variables can be described using a monotonic function").

Here is an example of how this can mislead users. You can see the correlation between continuous variables as estimated by Cramér V, Spearman and Kendall:

(screenshot from 2020-10-21 comparing the three correlation estimates)

The Cramér V correlation is quite different from Spearman or Kendall.

In the case of ordinal variables, you could treat them as categorical, or as continuous if Kendall or Spearman correlation is used.

I don't know how to assess the relationship between categorical and continuous variables. I would use the Kruskal-Wallis test (the non-parametric version of one-way ANOVA) to test whether a relationship exists. However, I don't know how to quantify it :/

Here is a nice introduction to the problem.
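To make the difference concrete, here is a small standalone sketch (using scipy and pandas, not the fastinference internals; cramers_v below is the usual chi-squared formulation, which may differ in detail from the one in the library). With continuous data almost every value is unique, so the contingency table degenerates and Cramér V saturates near 1 even for unrelated variables, while Kendall stays near 0:

import numpy as np
import pandas as pd
from scipy import stats

def cramers_v(a, b):
    "Cramér V from the chi-squared statistic of the contingency table"
    confusion = pd.crosstab(a, b)
    chi2 = stats.chi2_contingency(confusion)[0]
    n = confusion.values.sum()
    r, k = confusion.shape
    return np.sqrt(chi2 / (n * (min(r, k) - 1)))

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = rng.normal(size=500)  # independent of x

print(stats.kendalltau(x, y)[0])  # ~0: correctly reports no relationship
print(cramers_v(x, y))            # ~1: every value becomes its own "category"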

So, the current plot_dendrogram implementation can mislead users. My proposal is to change the current plot_dendrogram function so that it:

  1. Performs two dendrograms, one for categorical variables and another for continuous features.
  2. Optionally, allows passing which variables will be treated as categorical and which as continuous.
  3. Makes passing a dataframe optional, using the training dataframe by default.

I'll make a PR to fix it.

EDIT: the same applies to get_top_corr_dict

error in fully_decoded: ValueError: only one element tensors can be converted to Python scalars

This happens when enabling fully_decoded in get_preds. I don't know where this comes from; any ideas on where to start looking?

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-62-6fffcf6bb38e> in <module>
----> 1 foo = learn.get_preds(fully_decoded=True)

~/.local/lib/python3.6/site-packages/fastinference/inference/inference.py in get_preds(x, ds_idx, dl, raw_outs, decoded_loss, fully_decoded, **kwargs)
     70     else:
     71         outs.insert(0, raw)
---> 72     if fully_decoded: outs = _fully_decode(x.dls, inps, outs, dec_out, is_multi)
     73     if decoded_loss: outs = _decode_loss(x.dls.vocab, dec_out, outs)
     74     return outs

~/.local/lib/python3.6/site-packages/fastinference/inference/inference.py in _fully_decode(dl, inps, outs, dec_out, is_multi)
     14             inps[i] = torch.cat(inps[i], dim=0)
     15     else:
---> 16         inps = tensor(*inps[0])
     17     b = (*tuplify(inps), *tuplify(dec_out))
     18     try:

/usr/local/lib/python3.6/dist-packages/fastai2/torch_core.py in tensor(x, *rest, **kwargs)
    108     # if isinstance(x, (tuple,list)) and len(x)==0: return tensor(0)
    109     res = (x if isinstance(x, Tensor)
--> 110            else torch.tensor(x, **kwargs) if isinstance(x, (tuple,list))
    111            else _array2tensor(x) if isinstance(x, ndarray)
    112            else as_tensor(x.values, **kwargs) if isinstance(x, (pd.Series, pd.DataFrame))

ValueError: only one element tensors can be converted to Python scalars
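For anyone digging into this: torch.tensor (which fastai's tensor helper falls back to for lists, per the frame above) can only convert a list whose elements are scalars or single-element tensors, so _fully_decode fails as soon as the stored inputs have more than one element each. A minimal reproduction, independent of fastai:

import torch

torch.tensor([torch.tensor(1.), torch.tensor(2.)])  # fine: single-element tensors
torch.tensor([torch.ones(3), torch.ones(3)])  # ValueError: only one element tensors can be converted to Python scalars
torch.stack([torch.ones(3), torch.ones(3)])   # the usual fix: stacks into shape (2, 3)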

SHAP won't install "from fastinference.tabular import *" Fails

Trying to install and use SHAP.

!pip install fastinference fastai -q works fine.

from fastinference.tabular import * produces:

ModuleNotFoundError                       Traceback (most recent call last)
in <module>()
----> 1 from fastinference.tabular import *

3 frames
/usr/local/lib/python3.6/dist-packages/fastinference/tabular/__init__.py in <module>()
      3     raise ImportError("The interp module is not installed.")
      4
----> 5 from .shap import *
      6 from .interpretation import *

/usr/local/lib/python3.6/dist-packages/fastinference/tabular/shap/__init__.py in <module>()
----> 1 from .interp import *

/usr/local/lib/python3.6/dist-packages/fastinference/tabular/shap/interp.py in <module>()
      4
      5 # Cell
----> 6 from .core import _prepare_data, _predict
      7 import shap
      8 from fastai.tabular.all import *

/usr/local/lib/python3.6/dist-packages/fastinference/tabular/shap/core.py in <module>()
      4
      5 # Cell
----> 6 from fastai.tabular.all import *
      7
      8 # Cell

ModuleNotFoundError: No module named 'fastai.tabular.all'


I have also tried !pip install fastinference[interp] -q, with no luck. Thanks!

NameError: name 'cat_names' is not defined

Hello

I ran this code:

from fastai.tabular.all import *
from fastinference.inference.export import to_fastinference

path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')

splits = RandomSplitter()(range_of(df))
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
y_names = 'salary'

dls = TabularPandas(df, procs=procs, cat_names=cat_names, cont_names=cont_names,
                   y_names=y_names, splits=splits).dataloaders()
learn = tabular_learner(dls, layers=[200,100])

learn.to_fastinference()

and got the error:

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-11-55f457cbac14> in <module>
----> 1 learn.to_fastinference()

~/.virtualenvs/temp/lib/python3.6/site-packages/fastinference/inference/export.py in to_fastinference(x, data_fname, model_fname, path)
     89     "Export data for `fastinference_onnx` or `_pytorch` to use"
     90     if not isinstance(path,Path): path = Path(path)
---> 91     dicts = get_information(x.dls)
     92     with open(path/f'{data_fname}.pkl', 'wb') as handle:
     93         pickle.dump(dicts, handle, protocol=pickle.HIGHEST_PROTOCOL)

~/.virtualenvs/temp/lib/python3.6/site-packages/fastinference/inference/export.py in get_information(dls)
     45 
     46 # Cell
---> 47 def get_information(dls): return _extract_tfm_dicts(dls[0])
     48 
     49 # Cell

~/.virtualenvs/temp/lib/python3.6/site-packages/fastcore/dispatch.py in __call__(self, *args, **kwargs)
    127         elif self.inst is not None: f = MethodType(f, self.inst)
    128         elif self.owner is not None: f = MethodType(f, self.owner)
--> 129         return f(*args, **kwargs)
    130 
    131     def __get__(self, inst, owner):

~/.virtualenvs/temp/lib/python3.6/site-packages/fastinference/inference/export.py in _extract_tfm_dicts(dl)
     60     name2idx = {name:n for n,name in enumerate(dl.dataset) if name in dl.cat_names or name in dl.cont_names}
     61     idx2name = {v:k for k,v in name2idx.items()}
---> 62     cat_idxs = {name2idx[name]:name for name in cat_names}
     63     cont_idxs = {name2idx[name]:name for name in cont_names}
     64     names = {'cats':cat_idxs, 'conts':cont_idxs}

NameError: name 'cat_names' is not defined

Please advise
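Note: the traceback suggests a plain scoping bug. Line 60 of export.py correctly references dl.cat_names/dl.cont_names, while lines 62-63 use the bare names. A sketch of the likely fix (untested against the repo):

cat_idxs = {name2idx[name]:name for name in dl.cat_names}
cont_idxs = {name2idx[name]:name for name in dl.cont_names}
names = {'cats':cat_idxs, 'conts':cont_idxs}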

02_shap.interp.ipynb error?

Hi, I tried running this notebook and got an error at this line:

exp = ShapInterpretation(learn)
exp.decision_plot(class_id=0, row_idx=10)

TypeError                                 Traceback (most recent call last)
<ipython-input-12-907d69fd87c2> in <module>
----> 1 exp.decision_plot(class_id=0, row_idx=10)

<ipython-input-8-27630e1fb799> in decision_plot(self, class_id, row_idx, **kwargs)
     16     def decision_plot(self, class_id=0, row_idx=-1, **kwargs):
     17         "Visualize model decision using cumulative `SHAP` values."
---> 18         shap_vals, exp_val = _get_values(self, class_id)
     19         n_rows = shap_vals.shape[0]
     20         if row_idx == -1:

<ipython-input-10-91e394550211> in _get_values(interp, class_id)
      5     exp_vals = interp.explainer.expected_value
      6     if interp.is_multi_output:
----> 7         (class_name, class_idx) = _get_class_info(interp, class_id)
      8         print(f"Classification model detected, displaying score for the class {class_name}.")
      9         print("(use `class_id` to specify another class)")

<ipython-input-9-96fbf3aee7cf> in _get_class_info(interp, class_id)
      2 def _get_class_info(interp:ShapInterpretation, class_id):
      3     "Returns class name associated with index, or vice-versa"
----> 4     if isinstance(class_id, int): class_idx, class_name = class_id, interp.class_names[class_id]
      5     else: class_idx, class_name = interp.class_names.o2i[class_id], class_id
      6     return (class_name, class_idx)

TypeError: 'NoneType' object is not subscriptable

Can't run notebooks

Hello, thanks for working on this. I am trying to convert text and image models I trained with the current fastai (2.7.11) to ONNX, and this library seems like what I need. Is this project still being updated?

I have tried to run a bunch of the notebooks without success, including:
https://github.com/muellerzr/fastblog/blob/master/_notebooks/2020-06-08-fastinference.ipynb
https://github.com/muellerzr/fastinference/blob/master/nbs/03_onnx.ipynb

I am working on colab and trying to set up the right environment. First, I noticed that you are importing fastai2. Now that fastai v2 is stable, does fastinference still work with the current version of fastai?

When trying to import fastinference after pip installing, I keep getting:

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-9-dd2f399daef1> in <module>
      2 from fastinference.soft_dependencies import SoftDependencies
      3 if not SoftDependencies.check()['onnxcpu'] and not SoftDependencies.check()['onnxgpu']:
----> 4     raise ImportError("The onnxcpu or onnxgpu module is not installed.")

ImportError: The onnxcpu or onnxgpu module is not installed.

I have fastai, fastinference and onnx installed. What else do I need? I mainly just want to be able to replicate your notebooks first to understand the library and eventually convert my fastai models to ONNX.

Thanks!

New fastai inference API

Hey Zach,
Let's work here on prototyping the inference API.
What I would like to have (Santa's wishlist):

  • Streamlined torchscript support for all fastai models: simple models should be compatible with jit.trace, and more complex ones (those with control-flow decisions) with jit.script; see the sketch after this list. The folks at Facebook may be able to help here, they are super interested in this right now. The user should have simple image preprocessing/postprocessing to make inference work once the model is exported to plain PyTorch. If Jeremy splits the fastai lib into core/vision/etc., we could depend on fastai core.
  • ONNX: export for all models. Image encoders should work out of the box; some layers are still missing for Unets (PixelShuffle). Tabular should work also. Without being an expert, I would expect torchscript to replace the ONNX pipeline in the future, one less layer.
  • TensorRT: we should probably start discussing with them, as the TensorRT framework is super fast for GPU inference. This could be done later, once we have ONNX exports. I have a contact at NVIDIA who could help us export to TensorRT.
  • DeepStream? Stas Bekman is a guru on this topic; we could ask him what he thinks about it.
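As a sketch of the torchscript bullet, here is roughly what the export could look like for a simple vision model (learn is assumed to be a trained fastai Learner; untested):

import torch

model = learn.model.eval().cpu()  # the plain nn.Module inside the Learner
example_batch = torch.randn(1, 3, 224, 224)  # representative input for tracing

traced = torch.jit.trace(model, example_batch)  # simple models: record one forward pass
traced.save("model_traced.pt")

# models with data-dependent control flow need compilation instead of tracing:
# scripted = torch.jit.script(model)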

We should have tests that periodically verify that this functionality is not broken and that performance is maintained. This is something fastai does not have right now and needs: for example, fastai's unet is slower than before, as I noticed the other day.

Another cool thing would be to serve the model with TorchServe directly from fastai, like:

learn.serve(port=5151)

and get a service running that performs inference over HTTP.

ImportError: The interp module is not installed

Hi, I wanted to test SHAP with fastinference, but there is an import error. How do I fix this? Thanks!

ImportError                               Traceback (most recent call last)
<ipython-input-7-d27f1c452f1a> in <module>
----> 1 from fastinference.tabular.shap.core import _prepare_data, _prepare_test_data, _predict
      2 import shap
      3 from fastai.tabular.all import *

~/anaconda3/envs/ab/lib/python3.7/site-packages/fastinference/tabular/__init__.py in <module>
      1 from ..soft_dependencies import SoftDependencies
      2 if not SoftDependencies.check()['interp']:
----> 3     raise ImportError("The interp module is not installed.")
      4 
      5 from .shap import *

ImportError: The interp module is not installed.

Embedding gradients not being computed for intrinsic attention

According to the preprint cited in the original fastai implementation, the sensitivity of outputs to inputs is given by the gradient of the outputs w.r.t. the inputs. However, in the source code of intrinsic_attention, the line computing the sensitivity does not use the gradient of the embeddings. This results in the same attention being output no matter which class_id is used:
attn = emb.squeeze().abs().sum(dim=-1)

Below you can see the intrinsic attention output by the model with the current code for different class_id values:


learn.intrinsic_attention(text=text,class_id=9,cmap=cm.RdYlGn_r)
TensorText([0.9443, 0.7151, 0.9200, 0.3891, 0.7568, 0.6479, 0.3891, 0.6266, 1.0000,
        0.3891, 0.6479, 0.7568, 0.6417, 0.9947, 0.6417, 0.6417, 0.6417, 0.7568,
        0.6479, 0.9313, 0.3891, 0.3891, 0.7151, 0.3891, 0.3891, 0.9313, 0.9200,
        0.7151, 0.9255, 0.6417, 0.6417, 0.9947, 0.9255, 0.3891, 0.7568, 0.9200,
        0.7151, 0.9255, 0.7151, 1.0000, 0.9255, 0.6417, 1.0000, 0.6417, 0.6417,
        0.6417, 0.7568, 0.7754, 0.9255, 0.9291, 1.0000],

learn.intrinsic_attention(text=text,class_id=0,cmap=cm.RdYlGn_r)
TensorText([0.9443, 0.7151, 0.9200, 0.3891, 0.7568, 0.6479, 0.3891, 0.6266, 1.0000,
        0.3891, 0.6479, 0.7568, 0.6417, 0.9947, 0.6417, 0.6417, 0.6417, 0.7568,
        0.6479, 0.9313, 0.3891, 0.3891, 0.7151, 0.3891, 0.3891, 0.9313, 0.9200,
        0.7151, 0.9255, 0.6417, 0.6417, 0.9947, 0.9255, 0.3891, 0.7568, 0.9200,
        0.7151, 0.9255, 0.7151, 1.0000, 0.9255, 0.6417, 1.0000, 0.6417, 0.6417,
        0.6417, 0.7568, 0.7754, 0.9255, 0.9291, 1.0000],
       grad_fn=<AliasBackward>)

learn.intrinsic_attention(text=text,class_id=-1,cmap=cm.RdYlGn_r)
TensorText([0.9443, 0.7151, 0.9200, 0.3891, 0.7568, 0.6479, 0.3891, 0.6266, 1.0000,
        0.3891, 0.6479, 0.7568, 0.6417, 0.9947, 0.6417, 0.6417, 0.6417, 0.7568,
        0.6479, 0.9313, 0.3891, 0.3891, 0.7151, 0.3891, 0.3891, 0.9313, 0.9200,
        0.7151, 0.9255, 0.6417, 0.6417, 0.9947, 0.9255, 0.3891, 0.7568, 0.9200,
        0.7151, 0.9255, 0.7151, 1.0000, 0.9255, 0.6417, 1.0000, 0.6417, 0.6417,
        0.6417, 0.7568, 0.7754, 0.9255, 0.9291, 1.0000],
       grad_fn=<AliasBackward>)

In the original implementation from fastai v1, the gradient is used (line 65: attn = emb.grad.squeeze().abs().sum(dim=-1)), and the intrinsic_attention function returns different attentions for each class.
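For comparison, a sketch of what the gradient-based version looks like (names are illustrative: encoder and classifier stand in for the pieces of the AWD-LSTM model; only the last two lines follow the fastai v1 source):

emb = encoder(batch).detach().requires_grad_(True)  # embeddings that track gradients
out = classifier(emb)  # forward pass through the rest of the model
out[0, class_id].backward()  # populates emb.grad for the chosen class

attn = emb.grad.squeeze().abs().sum(dim=-1)  # fastai v1, line 65: use the gradient
attn = attn / attn.max()  # normalize to [0, 1] for plotting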

"NameError: name '_ConstantFunc' is not defined"

from fastinference.inference import * (worked fine)

%%time
for i in range(40): learn.predict(img)

This code returned:

~/miniconda3/envs/fastai2/lib/python3.6/site-packages/fastinference/inference/inference.py in predict(x, item, with_input, rm_type_tfms)
     79 def predict(x:Learner, item, with_input=False, rm_type_tfms=None):
     80         dl = x.dls.test_dl([item], rm_type_tfms=rm_type_tfms, num_workers=0)
---> 81         res = x.get_preds(dl=dl, with_input=with_input, with_decoded=True)
     82         return res

~/miniconda3/envs/fastai2/lib/python3.6/site-packages/fastinference/inference/inference.py in get_preds(self, ds_idx, dl, with_input, with_decoded, with_loss, raw, act, inner, reorder, cbs, **kwargs)
     55     if reorder and hasattr(dl, 'get_idxs'):
     56         idxs = dl.get_idxs()
---> 57         dl = dl.new(get_idxs = _ConstantFunc(idxs))
     58     cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, **kwargs)
     59     ctx_mgrs = self.validation_context(cbs=L(cbs)+[cb], inner=inner)

NameError: name '_ConstantFunc' is not defined

It seems that to run predict with fastinference, I need to have a databunch loaded with the learner itself.

The steps I followed:
1 - Import fastai
2 - Load a pre-trained model (this model was exported with .export()) with learn = load_learner('PATH_OF_LEARNER', cpu=False)
3 - Predict with an image loaded with opencv (cv2.imread(image_path))
4 - Import fastinference with from fastinference.inference import * (no error returned)
5 - Repeat step 3 and got the error.

Error importing interpretation module

It appears this is triggered where get_features_core is passed to the delegates decorator. I switched out get_features_core for TabularLearner.get_features_core, which resolved the immediate error but seemed to cause issues further downstream.

I don't know enough about the fastcore or fastinference libraries to understand the desired behavior here.

>>> from fastinference.tabular.interpretation import *
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/lib/python3.8/site-packages/fastinference/tabular/interpretation.py", line 138, in <module>
    def get_top_features_corr(x:TabularLearner, df:Optional[pd.DataFrame]=None, thresh:float=0.8, **kwargs):
  File "/lib/python3.8/site-packages/fastcore/meta.py", line 111, in _f
    if to is None: to_f,from_f = f.__base__.__init__,f.__init__
AttributeError: 'function' object has no attribute '__base__'

Remove package from results, alternative option

Here is a new option to use that is better (the def line went missing from the paste; the imports and the function name get_dependencies below are reconstructed):

import ast
import pipdeptree

def get_dependencies(  # reconstructed name; the original signature line was lost
    package_name:str, # The name of a python package
    depth_limit:int=1, # How deep to follow nested dependencies
) -> dict: # A dictionary of {package:version}
    "Recursively grabs dependencies of python package"
    pkgs = pipdeptree.get_installed_distributions(local_only=False, user_only=False)
    tree = pipdeptree.PackageDAG.from_pkgs(pkgs)
    tree = tree.filter([package_name], None)
    curr_depth=0
    def _get_deps(j, dep_dict={}, curr_depth=0):
        if curr_depth > depth_limit: return dep_dict
        if isinstance(j, list):
            for a in j:
                _get_deps(a, dep_dict, curr_depth)
        elif isinstance(j, dict):
            if 'package_name' in j.keys():
                if j['package_name'] not in dep_dict.keys() and j['package_name'] != package_name:
                    dep_dict[j['package_name']] = j['installed_version']
            if 'dependencies' in j.keys():
                curr_depth += 1
                return _get_deps(j['dependencies'], dep_dict, curr_depth)
        return dep_dict
    return _get_deps(ast.literal_eval(pipdeptree.render_json_tree(tree, 4)), {})
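Assuming the reconstructed name above, usage would be:

deps = get_dependencies("fastinference", depth_limit=2)
print(deps)  # a dict like {'fastai': '2.3.1', ...}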

Torchvision models do not seem to support `dynamic_axes` in ONNX export

Hi Zach,
we briefly interacted on Discord about this problem and you kindly suggested opening an issue here as well.

My problem is explained here.
According to this conversation it seems this is driven by torchvision not supporting dynamic_axes for ONNX quite yet. I have commented there, hoping to get a reply from the devs.

On top of the code provided in the PyTorch forums, I have also:

  • tried your solution
  • tried adapting this gist

with no luck yet.
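For reference, the kind of export being attempted looks roughly like this (a sketch; model and dummy_input stand in for the torchvision model and an example input):

import torch

torch.onnx.export(
    model, dummy_input, "model.onnx",
    input_names=["input"], output_names=["output"],
    # mark the batch dimension as dynamic so any batch size works at runtime
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=11,
)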

It would be great to hear your thoughts as I am running out of ideas! :)

Fastinference has no tabular attribute

Hi. The recent version of fastai (2.1.17) started throwing a lot of errors, one of them related to fastinference.
When I call the SHAP interpretation, it throws:

AttributeError: module 'fastinference' has no attribute 'tabular'

The onnx-cpu extra is installing fastai2

I haven't been able to see what's going on.

If I install onnxruntime on one side and fastinference on the other, everything works fine*. But if I install fastinference[onnx-cpu], it installs the old fastai2 package and breaks.

* I'm getting errors in the to_onnx patched function, but I'm still looking into what's going on so I can report it better. When I say "fine" I mean that the SoftDependencies check is not raising an error and I can import both from fastinference.fastinference import * and from fastinference.onnx import *.

pip install fastinference[interp] doesn't work

Running pip install fastinference[interp] is not working. I already have the latest version of SHAP (0.39) installed. It seems to crash when running setup.py for SHAP: fastinference pins shap<0.36, so pip tries to build shap 0.35.0 from source.

(fastai) PS C:\work> pip install fastinference[interp]
Requirement already satisfied: fastinference[interp] in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (0.0.36)
Requirement already satisfied: fastai>=2.0.0 in c:\work\ml\fastai (from fastinference[interp]) (2.3.1)
Collecting shap<0.36.0
  Using cached shap-0.35.0.tar.gz (273 kB)
Collecting plotly
  Using cached plotly-4.14.3-py2.py3-none-any.whl (13.2 MB)
Collecting plotnine
  Using cached plotnine-0.8.0-py3-none-any.whl (4.7 MB)
Requirement already satisfied: pip in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from fastai>=2.0.0->fastinference[interp]) (21.0.1)
Requirement already satisfied: packaging in c:\users\pc\appdata\roaming\python\python38\site-packages (from fastai>=2.0.0->fastinference[interp]) (20.4)
Requirement already satisfied: fastcore<1.4,>=1.3.8 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from fastai>=2.0.0->fastinference[interp]) (1.3.20)
Requirement already satisfied: torchvision>=0.8.2 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from fastai>=2.0.0->fastinference[interp]) (0.9.1)
Requirement already satisfied: matplotlib in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from fastai>=2.0.0->fastinference[interp]) (3.3.4)
Requirement already satisfied: pandas in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from fastai>=2.0.0->fastinference[interp]) (1.2.4)
Requirement already satisfied: requests in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from fastai>=2.0.0->fastinference[interp]) (2.25.1)
Requirement already satisfied: pyyaml in c:\users\pc\appdata\roaming\python\python38\site-packages (from fastai>=2.0.0->fastinference[interp]) (5.3.1)
Requirement already satisfied: fastprogress>=0.2.4 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from fastai>=2.0.0->fastinference[interp]) (1.0.0)
Requirement already satisfied: pillow>6.0.0 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from fastai>=2.0.0->fastinference[interp]) (8.1.2)
Requirement already satisfied: scikit-learn in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from fastai>=2.0.0->fastinference[interp]) (0.24.1)
Requirement already satisfied: scipy in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from fastai>=2.0.0->fastinference[interp]) (1.6.1)
Requirement already satisfied: spacy<4 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from fastai>=2.0.0->fastinference[interp]) (3.0.5)
Requirement already satisfied: torch<1.9,>=1.7.0 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from fastai>=2.0.0->fastinference[interp]) (1.8.1)
Requirement already satisfied: numpy in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from fastprogress>=0.2.4->fastai>=2.0.0->fastinference[interp]) (1.20.1)
Requirement already satisfied: tqdm>4.25.0 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from shap<0.36.0->fastinference[interp]) (4.59.0)
Requirement already satisfied: typer<0.4.0,>=0.3.0 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from spacy<4->fastai>=2.0.0->fastinference[interp]) (0.3.2)
Requirement already satisfied: preshed<3.1.0,>=3.0.2 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from spacy<4->fastai>=2.0.0->fastinference[interp]) (3.0.5)
Requirement already satisfied: jinja2 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from spacy<4->fastai>=2.0.0->fastinference[interp]) (3.0.1)
Requirement already satisfied: spacy-legacy<3.1.0,>=3.0.0 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from spacy<4->fastai>=2.0.0->fastinference[interp]) (3.0.1)
Requirement already satisfied: catalogue<2.1.0,>=2.0.1 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from spacy<4->fastai>=2.0.0->fastinference[interp]) (2.0.1)
Requirement already satisfied: srsly<3.0.0,>=2.4.0 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from spacy<4->fastai>=2.0.0->fastinference[interp]) (2.4.0)
Requirement already satisfied: setuptools in c:\users\pc\appdata\roaming\python\python38\site-packages (from spacy<4->fastai>=2.0.0->fastinference[interp]) (50.3.0)
Requirement already satisfied: blis<0.8.0,>=0.4.0 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from spacy<4->fastai>=2.0.0->fastinference[interp]) (0.7.4)
Requirement already satisfied: wasabi<1.1.0,>=0.8.1 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from spacy<4->fastai>=2.0.0->fastinference[interp]) (0.8.2)
Requirement already satisfied: pathy>=0.3.5 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from spacy<4->fastai>=2.0.0->fastinference[interp]) (0.4.0)
Requirement already satisfied: cymem<2.1.0,>=2.0.2 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from spacy<4->fastai>=2.0.0->fastinference[interp]) (2.0.5)
Requirement already satisfied: pydantic<1.8.0,>=1.7.1 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from spacy<4->fastai>=2.0.0->fastinference[interp]) (1.7.3)
Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from spacy<4->fastai>=2.0.0->fastinference[interp]) (1.0.5)
Requirement already satisfied: thinc<8.1.0,>=8.0.2 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from spacy<4->fastai>=2.0.0->fastinference[interp]) (8.0.2)
Requirement already satisfied: six in c:\users\pc\appdata\roaming\python\python38\site-packages (from packaging->fastai>=2.0.0->fastinference[interp]) (1.15.0)
Requirement already satisfied: pyparsing>=2.0.2 in c:\users\pc\appdata\roaming\python\python38\site-packages (from packaging->fastai>=2.0.0->fastinference[interp]) (2.4.7)
Requirement already satisfied: smart-open<4.0.0,>=2.2.0 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from pathy>=0.3.5->spacy<4->fastai>=2.0.0->fastinference[interp]) (2.2.1)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from requests->fastai>=2.0.0->fastinference[interp]) (1.26.4)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from requests->fastai>=2.0.0->fastinference[interp]) (2020.12.5)
Requirement already satisfied: chardet<5,>=3.0.2 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from requests->fastai>=2.0.0->fastinference[interp]) (4.0.0)
Requirement already satisfied: idna<3,>=2.5 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from requests->fastai>=2.0.0->fastinference[interp]) (2.10)
Requirement already satisfied: boto3 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from smart-open<4.0.0,>=2.2.0->pathy>=0.3.5->spacy<4->fastai>=2.0.0->fastinference[interp]) (1.17.33)
Requirement already satisfied: typing-extensions in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from torch<1.9,>=1.7.0->fastai>=2.0.0->fastinference[interp]) (3.7.4.3)
Requirement already satisfied: click<7.2.0,>=7.1.1 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from typer<0.4.0,>=0.3.0->spacy<4->fastai>=2.0.0->fastinference[interp]) (7.1.2)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from boto3->smart-open<4.0.0,>=2.2.0->pathy>=0.3.5->spacy<4->fastai>=2.0.0->fastinference[interp]) (0.10.0)
Requirement already satisfied: botocore<1.21.0,>=1.20.33 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from boto3->smart-open<4.0.0,>=2.2.0->pathy>=0.3.5->spacy<4->fastai>=2.0.0->fastinference[interp]) (1.20.33)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from boto3->smart-open<4.0.0,>=2.2.0->pathy>=0.3.5->spacy<4->fastai>=2.0.0->fastinference[interp]) (0.3.6)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in c:\users\pc\appdata\roaming\python\python38\site-packages (from botocore<1.21.0,>=1.20.33->boto3->smart-open<4.0.0,>=2.2.0->pathy>=0.3.5->spacy<4->fastai>=2.0.0->fastinference[interp]) (2.8.1)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from jinja2->spacy<4->fastai>=2.0.0->fastinference[interp]) (2.0.1)
Requirement already satisfied: cycler>=0.10 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from matplotlib->fastai>=2.0.0->fastinference[interp]) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from matplotlib->fastai>=2.0.0->fastinference[interp]) (1.3.1)
Requirement already satisfied: pytz>=2017.3 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from pandas->fastai>=2.0.0->fastinference[interp]) (2021.1)
Requirement already satisfied: retrying>=1.3.3 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from plotly->fastinference[interp]) (1.3.3)
Requirement already satisfied: patsy>=0.5.1 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from plotnine->fastinference[interp]) (0.5.1)
Requirement already satisfied: mizani>=0.7.3 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from plotnine->fastinference[interp]) (0.7.3)
Requirement already satisfied: statsmodels>=0.12.1 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from plotnine->fastinference[interp]) (0.12.2)
Requirement already satisfied: descartes>=1.1.0 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from plotnine->fastinference[interp]) (1.1.0)
Requirement already satisfied: palettable in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from mizani>=0.7.3->plotnine->fastinference[interp]) (3.3.0)
Requirement already satisfied: joblib>=0.11 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from scikit-learn->fastai>=2.0.0->fastinference[interp]) (1.0.1)
Requirement already satisfied: threadpoolctl>=2.0.0 in c:\users\pc\miniconda3\envs\fastai\lib\site-packages (from scikit-learn->fastai>=2.0.0->fastinference[interp]) (2.1.0)
Building wheels for collected packages: shap
  Building wheel for shap (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: 'c:\users\pc\miniconda3\envs\fastai\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\pc\\AppData\\Local\\Temp\\pip-install-hejpe705\\shap_2eff441acfc04fe3bd0ac670c603bf74\\setup.py'"'"'; __file__='"'"'C:\\Users\\pc\\AppData\\Local\\Temp\\pip-install-hejpe705\\shap_2eff441acfc04fe3bd0ac670c603bf74\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'c:\users\pc\AppData\Local\Temp\pip-wheel-gcnda71l'
       cwd: c:\users\pc\AppData\Local\Temp\pip-install-hejpe705\shap_2eff441acfc04fe3bd0ac670c603bf74\
  Complete output (67 lines):
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build\lib.win-amd64-3.8
  creating build\lib.win-amd64-3.8\shap
  copying shap\common.py -> build\lib.win-amd64-3.8\shap
  copying shap\datasets.py -> build\lib.win-amd64-3.8\shap
  copying shap\__init__.py -> build\lib.win-amd64-3.8\shap
  creating build\lib.win-amd64-3.8\shap\explainers
  copying shap\explainers\additive.py -> build\lib.win-amd64-3.8\shap\explainers
  copying shap\explainers\bruteforce.py -> build\lib.win-amd64-3.8\shap\explainers
  copying shap\explainers\explainer.py -> build\lib.win-amd64-3.8\shap\explainers
  copying shap\explainers\gradient.py -> build\lib.win-amd64-3.8\shap\explainers
  copying shap\explainers\kernel.py -> build\lib.win-amd64-3.8\shap\explainers
  copying shap\explainers\linear.py -> build\lib.win-amd64-3.8\shap\explainers
  copying shap\explainers\mimic.py -> build\lib.win-amd64-3.8\shap\explainers
  copying shap\explainers\partition.py -> build\lib.win-amd64-3.8\shap\explainers
  copying shap\explainers\permutation.py -> build\lib.win-amd64-3.8\shap\explainers
  copying shap\explainers\pytree.py -> build\lib.win-amd64-3.8\shap\explainers
  copying shap\explainers\sampling.py -> build\lib.win-amd64-3.8\shap\explainers
  copying shap\explainers\tf_utils.py -> build\lib.win-amd64-3.8\shap\explainers
  copying shap\explainers\tree.py -> build\lib.win-amd64-3.8\shap\explainers
  copying shap\explainers\__init__.py -> build\lib.win-amd64-3.8\shap\explainers
  creating build\lib.win-amd64-3.8\shap\explainers\other
  copying shap\explainers\other\coefficent.py -> build\lib.win-amd64-3.8\shap\explainers\other
  copying shap\explainers\other\lime.py -> build\lib.win-amd64-3.8\shap\explainers\other
  copying shap\explainers\other\maple.py -> build\lib.win-amd64-3.8\shap\explainers\other
  copying shap\explainers\other\random.py -> build\lib.win-amd64-3.8\shap\explainers\other
  copying shap\explainers\other\treegain.py -> build\lib.win-amd64-3.8\shap\explainers\other
  copying shap\explainers\other\__init__.py -> build\lib.win-amd64-3.8\shap\explainers\other
  creating build\lib.win-amd64-3.8\shap\explainers\deep
  copying shap\explainers\deep\deep_pytorch.py -> build\lib.win-amd64-3.8\shap\explainers\deep
  copying shap\explainers\deep\deep_tf.py -> build\lib.win-amd64-3.8\shap\explainers\deep
  copying shap\explainers\deep\__init__.py -> build\lib.win-amd64-3.8\shap\explainers\deep
  creating build\lib.win-amd64-3.8\shap\plots
  copying shap\plots\bar.py -> build\lib.win-amd64-3.8\shap\plots
  copying shap\plots\colorconv.py -> build\lib.win-amd64-3.8\shap\plots
  copying shap\plots\colors.py -> build\lib.win-amd64-3.8\shap\plots
  copying shap\plots\decision.py -> build\lib.win-amd64-3.8\shap\plots
  copying shap\plots\dependence.py -> build\lib.win-amd64-3.8\shap\plots
  copying shap\plots\embedding.py -> build\lib.win-amd64-3.8\shap\plots
  copying shap\plots\force.py -> build\lib.win-amd64-3.8\shap\plots
  copying shap\plots\force_matplotlib.py -> build\lib.win-amd64-3.8\shap\plots
  copying shap\plots\image.py -> build\lib.win-amd64-3.8\shap\plots
  copying shap\plots\monitoring.py -> build\lib.win-amd64-3.8\shap\plots
  copying shap\plots\partial_dependence.py -> build\lib.win-amd64-3.8\shap\plots
  copying shap\plots\summary.py -> build\lib.win-amd64-3.8\shap\plots
  copying shap\plots\text.py -> build\lib.win-amd64-3.8\shap\plots
  copying shap\plots\waterfall.py -> build\lib.win-amd64-3.8\shap\plots
  copying shap\plots\__init__.py -> build\lib.win-amd64-3.8\shap\plots
  creating build\lib.win-amd64-3.8\shap\benchmark
  copying shap\benchmark\experiments.py -> build\lib.win-amd64-3.8\shap\benchmark
  copying shap\benchmark\measures.py -> build\lib.win-amd64-3.8\shap\benchmark
  copying shap\benchmark\methods.py -> build\lib.win-amd64-3.8\shap\benchmark
  copying shap\benchmark\metrics.py -> build\lib.win-amd64-3.8\shap\benchmark
  copying shap\benchmark\models.py -> build\lib.win-amd64-3.8\shap\benchmark
  copying shap\benchmark\plots.py -> build\lib.win-amd64-3.8\shap\benchmark
  copying shap\benchmark\__init__.py -> build\lib.win-amd64-3.8\shap\benchmark
  creating build\lib.win-amd64-3.8\shap\plots\resources
  copying shap\plots\resources\bundle.js -> build\lib.win-amd64-3.8\shap\plots\resources
  copying shap\plots\resources\logoSmallGray.png -> build\lib.win-amd64-3.8\shap\plots\resources
  copying shap\tree_shap.h -> build\lib.win-amd64-3.8\shap
  running build_ext
  numpy.get_include() c:\users\pc\miniconda3\envs\fastai\lib\site-packages\numpy\core\include
  building 'shap._cext' extension
  error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
  ----------------------------------------
  ERROR: Failed building wheel for shap
  Running setup.py clean for shap
Failed to build shap
Installing collected packages: shap, plotnine, plotly
  Attempting uninstall: shap
    Found existing installation: shap 0.39.0
    Uninstalling shap-0.39.0:
      Successfully uninstalled shap-0.39.0
    Running setup.py install for shap ... error
    ERROR: Command errored out with exit status 1:
     command: 'c:\users\pc\miniconda3\envs\fastai\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\pc\\AppData\\Local\\Temp\\pip-install-hejpe705\\shap_2eff441acfc04fe3bd0ac670c603bf74\\setup.py'"'"'; __file__='"'"'C:\\Users\\pc\\AppData\\Local\\Temp\\pip-install-hejpe705\\shap_2eff441acfc04fe3bd0ac670c603bf74\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'c:\users\pc\AppData\Local\Temp\pip-record-uor8hprs\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\pc\miniconda3\envs\fastai\Include\shap'
         cwd: c:\users\pc\AppData\Local\Temp\pip-install-hejpe705\shap_2eff441acfc04fe3bd0ac670c603bf74\
    Complete output (67 lines):
    running install
    [... identical build output to the bdist_wheel attempt above ...]
    running build_ext
    numpy.get_include() c:\users\pc\miniconda3\envs\fastai\lib\site-packages\numpy\core\include
    building 'shap._cext' extension
    error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
    ----------------------------------------
  Rolling back uninstall of shap
  Moving to c:\users\pc\miniconda3\envs\fastai\lib\site-packages\shap-0.39.0.dist-info\
   from c:\users\pc\miniconda3\envs\fastai\Lib\site-packages\~hap-0.39.0.dist-info
  Moving to c:\users\pc\miniconda3\envs\fastai\lib\site-packages\shap\
   from c:\users\pc\miniconda3\envs\fastai\Lib\site-packages\~hap
ERROR: Command errored out with exit status 1: 'c:\users\pc\miniconda3\envs\fastai\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\pc\\AppData\\Local\\Temp\\pip-install-hejpe705\\shap_2eff441acfc04fe3bd0ac670c603bf74\\setup.py'"'"'; __file__='"'"'C:\\Users\\pc\\AppData\\Local\\Temp\\pip-install-hejpe705\\shap_2eff441acfc04fe3bd0ac670c603bf74\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'c:\users\pc\AppData\Local\Temp\pip-record-uor8hprs\install-record.txt' --single-version-externally-managed --compile --install-headers 'c:\users\pc\miniconda3\envs\fastai\Include\shap' Check the logs for full command output.

Testme

This was sent from Google Colaboratory

can't import fastinference.onnx

Hi @muellerzr

I am running fastinference==0.0.32, fastai==2.1.9 and fastcore==1.3.12.

When I try to import from fastinference.onnx import * to use the fastONNX class, I get this error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-10-52ff53f82e02> in <module>
----> 1 from fastinference.onnx import *

~/.virtualenvs/challenge/lib/python3.6/site-packages/fastinference/onnx.py in <module>
     18 # Cell
     19 #export
---> 20 from .inference.inference import _decode_loss
     21 
     22 # Cell

~/.virtualenvs/challenge/lib/python3.6/site-packages/fastinference/inference/__init__.py in <module>
----> 1 from .inference import *
      2 from .text import *

~/.virtualenvs/challenge/lib/python3.6/site-packages/fastinference/inference/inference.py in <module>
     52 @patch
     53 def get_preds(self:Learner, ds_idx=1, dl=None, with_input=False, with_decoded=False, with_loss=False, raw=False, act=None,
---> 54                 inner=False, reorder=True, cbs=None, **kwargs):
     55     if dl is None: dl = self.dls[ds_idx].new(shuffled=False, drop_last=False)
     56     if reorder and hasattr(dl, 'get_idxs'):

~/.virtualenvs/challenge/lib/python3.6/site-packages/fastcore/meta.py in _f(f)
    114         to_f = getattr(to_f,'__func__',to_f)
    115         if hasattr(from_f,'__delwrap__'): return f
--> 116         sig = inspect.signature(from_f)
    117         sigd = dict(sig.parameters)
    118         k = sigd.pop('kwargs')

/usr/lib/python3.6/inspect.py in signature(obj, follow_wrapped)
   3063 def signature(obj, *, follow_wrapped=True):
   3064     """Get a signature object for the passed callable."""
-> 3065     return Signature.from_callable(obj, follow_wrapped=follow_wrapped)
   3066 
   3067 

/usr/lib/python3.6/inspect.py in from_callable(cls, obj, follow_wrapped)
   2813         """Constructs Signature for the given callable object."""
   2814         return _signature_from_callable(obj, sigcls=cls,
-> 2815                                         follow_wrapper_chains=follow_wrapped)
   2816 
   2817     @property

/usr/lib/python3.6/inspect.py in _signature_from_callable(obj, follow_wrapper_chains, skip_bound_arg, sigcls)
   2191 
   2192     if not callable(obj):
-> 2193         raise TypeError('{!r} is not a callable object'.format(obj))
   2194 
   2195     if isinstance(obj, types.MethodType):

TypeError: None is not a callable object

When I use fastai==2.1.5 and fastcore==1.3.2 this problem doesn't exist, but not using the updated fastai causes other problems when running fastai interpretations, e.g. interp.plot_top_losses.

Fix ONNX issues

  1. Missing an import for inspect
  2. ONNX-CPU needs to install just basic onnxruntime

TypeError: requires_grad_() takes 1 positional argument but 2 were given

Here's the code I'm using:

from fastai.text.all import *  
from fastinference.inference import *   
learn_class=load_learner('/content/gdrive/MyDrive/TuesdayChatbot/classifier_model-'+ver+'.pkl')   
test=["In the article there were several things left out."]   
learn_class.predict(test)   
learn_class.intrinsic_attention(test[0])   
TypeError                                 Traceback (most recent call last)
<ipython-input-76-ba2c19ef8949> in <module>()
----> 1 learn_class.intrinsic_attention(test[0])

1 frames
/usr/local/lib/python3.6/dist-packages/fastinference/inference/text.py in _intrinsic_attention(learn, text, class_id)
    149     dl = learn.dls.test_dl([text])
    150     batch = next(iter(dl))[0]
--> 151     emb = learn.model[0].module.encoder(batch).detach().requires_grad_(True)
    152     lstm = learn.model[0].module(emb, True)
    153     learn.model.eval()

TypeError: requires_grad_() takes 1 positional argument but 2 were given

ModuleAttributeError: 'SequentialRNN' object has no attribute 'intrinsic_attention'

I'm running a classifier using AWD_LSTM on fastai2 in Colab. Fastinference is version 0.0.38, installed off of GitHub with pip.
The code:
learn_class.predict(test)
('FALSE', tensor(0), tensor([0.9983, 0.0017]))
learn_class.intrinsic_attention(test)
ModuleAttributeError                      Traceback (most recent call last)
in <module>()
----> 1 learn_class.intrinsic_attention(test)

1 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
    777             return modules[name]
    778         raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
--> 779             type(self).__name__, name))
    780
    781     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

ModuleAttributeError: 'SequentialRNN' object has no attribute 'intrinsic_attention'

"libcublas.so.10: cannot open shared object file: No such file or directory"

When running from fastinference.inference import * an error is thrown:

"OSError: libcublas.so.10: cannot open shared object file: No such file or directory"

The package was installed with $ pip install fastinference.

When running $ pip show fastinference this is the output:


Name: fastinference
Version: 0.0.25
Summary: A collection of inference modules
Home-page: https://github.com/muellerzr/fastinference/tree/master/
Author: Zachary Mueller
Author-email: [email protected]
License: Apache Software License 2.0
Location: /home/lucas/anaconda3/lib/python3.7/site-packages
Requires: fastai

Running nvidia-smi returns:


CUDA Version: 11.0

Returning N top predictions

Hi,
I have a use case where I need multiple predictions and their probabilities: if the model is not confident enough in the prediction, the user gets a choice of the N top predictions to choose the correct one themselves.

I've modified the fastinference code to implement this functionality (at the moment I just return a sorted list of all classes and their probabilities). Would you be interested in having it as a pull request? I haven't measured the speed and the code is a bit hacky at the moment, so I would need to clean it up first and integrate it with the original functionality.
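For what it's worth, the core of the idea fits in a few lines on top of the probabilities fastinference already computes (a sketch, assuming probs is the (n_items, n_classes) tensor returned by get_preds and vocab is dls.vocab):

import torch

def top_n_preds(probs, vocab, n=3):
    "Return the n most probable classes and their probabilities for each item"
    top_p, top_i = probs.topk(n, dim=-1)  # values come back sorted, highest first
    return [[(vocab[int(i)], p.item()) for p, i in zip(ps, idxs)]
            for ps, idxs in zip(top_p, top_i)]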

feature importance returns a ModuleAttributeError

I start by importing

from fastinference.tabular import *
from fastai.tabular.all import *

and I execute the tutorial in the tabular-interpretation section line by line. When I get to the line fi = learn.feature_importance(df=df), I get the following error:

(screenshot of the error traceback)
