
NCNN Models

A collection of pre-trained AI models, with notes on how they were converted and deployed. 中文

About

The ncnn framework enables cross-device deployment with the help of the Vulkan API. Models are pre-trained with PyTorch, TensorFlow, Paddle, etc., and then converted to ncnn models for final deployment on Windows, macOS, Linux, Android, iOS, WebAssembly, and uni-app. However, model conversion is not a one-click process and must be handled manually. To extend the range of applications ncnn can reach, we created this repository to collect cases of both successful and failed conversions.
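To illustrate the manual workflow, here is a minimal, hypothetical sketch of the usual first step: exporting a PyTorch model to ONNX before feeding it to a converter such as onnx2ncnn or pnnx. The model, input shape, and file names are placeholders, not entries from the list below.

```python
# Minimal sketch of step one of the PyTorch -> ONNX -> ncnn pipeline.
# The model, input shape, and file names are placeholders.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)
model.eval()  # export in inference mode

dummy = torch.rand(1, 3, 224, 224)  # NCHW dummy input used to trace the graph
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["images"], output_names=["output"],
    opset_version=11,
)
# The resulting model.onnx is then converted offline, e.g. with
# `onnx2ncnn model.onnx model.param model.bin`, and typically needs
# manual checking for unsupported operators.
```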

How to contribute

See the contribution tutorial.

✅ : converts and works ❌ : does not work ⭕ : works, but not worth contributing 🤔 : not verified, but worth contributing 🔥 / 💥 : hot

NCNN Models

We believe we will succeed in the end. 😂

| Model | Year | Size | From | Type | Convert | NCNN | Hot |
|---|---|---|---|---|---|---|---|
| roop | 2023 | 276.7M | ONNX | face_swap | 🤔 | | 🔥 |
| nerf | 2023 | 0.1MB | PyTorch | instant-ngp | | | |
| codeformer | 2023 | 212.5M | PyTorch | face_restoration | | | 🔥 |
| vits | 2022 | 91MB | PyTorch | tts | | | 🔥 |
| stablediffusion | 2022 | 1.7GB | PyTorch | diffusion | | | 🔥 |
| sherpa | 2022 | 134MB | PyTorch | tts | | | 🔥 |
| DTLN | 2022 | 1.9MB | PyTorch | audio_denoising | | | |
| gpt2-chinese | 2022 | 357MB | | nlp | | | |
| MAT | 2022 | | PyTorch | image_inpainting | | | |
| RVM | 2021 | 13.6MB | PyTorch | image_matting | | | |
| vitea | 2022 | 52.9MB | PyTorch | image_matting | | | |
| AnimeGanV3 | 2022 | | ONNX | style_transfer | | | |
| HybridNets | 2022 | | PyTorch | object_detection | | | |
| yolop | 2021 | | PyTorch | object_detection | 🤔 | | 💥 |
| pfld | 2019 | 4.9MB | PyTorch | face_detection | | | |
| Anime | 2021 | 18.8MB | ONNX | face_detection | | | |
| CaiT | 2021 | 34.3MB | PyTorch | image_classification | | | |
| FastestDet | 2022 | 0.4MB | PyTorch | object_detection | | | 💥 |
| yolov7 | 2022 | 12.1MB | PyTorch | object_detection | | | |
| yolov6 | 2022 | 8.4MB | PyTorch | object_detection | | | |
| yolov5 | 2021 | 2.3MB | PyTorch | object_detection | | | 💥 |
| yolo-fastestv2 | 2021 | 0.4MB | PyTorch | object_detection | | | 💥 |
| yolox | 2021 | 1.7MB | PyTorch | object_detection | | | |
| nanodet | 2020 | 2.3MB | ONNX | object_detection | | | |
| DenseNet | 2018 | 21.5MB | PyTorch | image_classification | | | |
| resnet18 | 2015 | 22.8MB | PyTorch | image_classification | | | |
| mobilenet_v2 | 2019 | 6.8MB | PyTorch | image_classification | | | |
| mobilenet_v3 | 2019 | 10.7MB | PyTorch | image_classification | | | |
| Res2Net | 2021 | 88.2MB | PyTorch | image_classification | | | |
| Res2Next50 | 2021 | 48.1MB | PyTorch | image_classification | | | |
| shufflenetv2 | 2018 | 4.4MB | ONNX | image_classification | | | |
| vgg16 | 2015 | 263MB | PyTorch | image_classification | | | |
| efficientnet | 2021 | 10.3MB | PyTorch | image_classification | | | |
| deeplabv3 | 2017 | 21.5MB | PyTorch | image_matting | | | |
| yolov7-mask | 2022 | 86.6MB | PyTorch | image_matting | 🤔 | | |
| deoldify | 2019 | 242MB | ONNX | image_inpainting | 🤔 | | |
| UltraFace | 2019 | 0.6MB | PyTorch | face_detection | | | |
| Anime2Real | 2022 | 22.2MB | PyTorch | style_transfer | | | |
| AnimeGanV2 | 2020 | 4.2MB | PyTorch | style_transfer | | | |
| styletransfer | 2016 | 3.2MB | ONNX | style_transfer | | | |
| ifrnet | 2022 | 5.6MB | PyTorch | video_frame_interpolation | | | |
| Rife | 2021 | 10MB | ONNX | video_frame_interpolation | | | |
| GFPGAN | 2021 | 214MB | ONNX | face_restoration | | | 💥 |

Awesome Apps based on ncnn

1. Deep Face Live

see DeepFaceLive

2. Video Super-Resolution

waifu2x-ncnn-vulkan, realcugan-ncnn-vulkan, realEsrgan-ncnn-vulkan, ...

see RealESRGAN

3. Video Matting

see MODNet

4. BlazePose

see BlazePose

5. AnimeGanV2

see AnimeGanV2

6. GPT2-ChineseChat-NCNN

see GPT2-ChineseChat-NCNN

QQ Group

  • 824562395 (when joining, please note which new model (2022 or later) you are converting)

Issues

Android yolov7 model fails to load

```
D/vulkan: searching for layers in '/data/app/com.tencent.yolov7test-VQ5vdVed2_ZMl_KkC1Pr2w==/lib/arm64'
D/vulkan: searching for layers in '/system/fake-libs64'
D/vulkan: searching for layers in '/data/app/com.tencent.yolov7test-VQ5vdVed2_ZMl_KkC1Pr2w==/base.apk!/lib/arm64-v8a'
E/vulkan: invalid vkGetInstanceProcAddr(VK_NULL_HANDLE, "vkEnumerateInstanceVersion") call
I/cent.yolov7test: type=1400 audit(0.0:22277): avc: denied { open } for path="/dev/__properties__/u:object_r:tee_supplicant_prop:s0" dev="tmpfs" ino=15850 scontext=u:r:untrusted_app_25:s0:c512,c768 tcontext=u:object_r:tee_supplicant_prop:s0 tclass=file permissive=1
I/cent.yolov7test: type=1400 audit(0.0:22278): avc: denied { getattr } for path="/dev/__properties__/u:object_r:tee_supplicant_prop:s0" dev="tmpfs" ino=15850 scontext=u:r:untrusted_app_25:s0:c512,c768 tcontext=u:object_r:tee_supplicant_prop:s0 tclass=file permissive=1
I/cent.yolov7test: type=1400 audit(0.0:22279): avc: denied { open } for path="/dev/__properties__/u:object_r:firstboot_prop:s0" dev="tmpfs" ino=15862 scontext=u:r:untrusted_app_25:s0:c512,c768 tcontext=u:object_r:firstboot_prop:s0 tclass=file permissive=1
I/cent.yolov7test: type=1400 audit(0.0:22285): avc: denied { open } for path="/dev/__properties__/u:object_r:bluetooth_prop:s0" dev="tmpfs" ino=15894 scontext=u:r:untrusted_app_25:s0:c512,c768 tcontext=u:object_r:bluetooth_prop:s0 tclass=file permissive=1
I/cent.yolov7test: type=1400 audit(0.0:22286): avc: denied { getattr } for path="/dev/__properties__/u:object_r:bluetooth_prop:s0" dev="tmpfs" ino=15894 scontext=u:r:untrusted_app_25:s0:c512,c768 tcontext=u:object_r:bluetooth_prop:s0 tclass=file permissive=1
I/mali_so: [File] : hardware/arm/maliT760/driver/product/base/src/mali_base_kbase.c; [Line] : 876; [Func] : base_context_deal_with_version_affairs_rk_ext;
    arm_release_ver of this mali_so is 'r18p0-01rel0', rk_so_ver is '8@0'.
D/mali_so: [File] : hardware/arm/maliT760/driver/product/base/src/mali_base_kbase.c; [Line] : 881; [Func] : base_context_deal_with_version_affairs_rk_ext;
    current process is NOT sf, to bail out.
W/ncnn: [0 Mali-T860]  queueC=0[2]  queueG=0[2]  queueT=0[2]
W/ncnn: [0 Mali-T860]  bugsbn1=0  bugbilz=0  bugcopc=0  bugihfa=1
W/ncnn: [0 Mali-T860]  fp16-p/s/a=1/0/1  int8-p/s/a=1/0/0
W/ncnn: [0 Mali-T860]  subgroup=16  basic=0  vote=0  ballot=0  shuffle=0
I/mali_so: [File] : hardware/arm/maliT760/driver/product/base/src/mali_base_kbase.c; [Line] : 876; [Func] : base_context_deal_with_version_affairs_rk_ext;
    arm_release_ver of this mali_so is 'r18p0-01rel0', rk_so_ver is '8@0'.
D/mali_so: [File] : hardware/arm/maliT760/driver/product/base/src/mali_base_kbase.c; [Line] : 881; [Func] : base_context_deal_with_version_affairs_rk_ext;
    current process is NOT sf, to bail out.
I/zygote64: Do partial code cache collection, code=12KB, data=20KB
I/zygote64: After code cache collection, code=12KB, data=20KB
I/zygote64: Increasing code cache capacity to 128KB
I/zygote64: Do partial code cache collection, code=12KB, data=39KB
I/zygote64: After code cache collection, code=12KB, data=39KB
I/zygote64: Increasing code cache capacity to 256KB
I/zygote64: Compiler allocated 7MB to compile void android.widget.TextView.<init>(android.content.Context, android.util.AttributeSet, int, int)
```

GFPGAN

I tried using your model files
encoder.param
encoder.bin
style.bin
and got a result that looks a little worse than the result from the Python GFPGAN with the GFPGANv1.3.pth model.

Add AMT

https://github.com/MCG-NKU/AMT
AMT is a lightweight, fast, and accurate algorithm for frame interpolation. It aims to provide practical solutions for video generation from a few given frames (at least two).

YOLOv7 anchor-free model fails

Hello! I'm trying to run the YOLOv7 anchor-free model using the YOLOv6 functions for grid and stride calculation. Here are fragments from my .cpp file:

```cpp
static void generate_grids_and_stride(const int target_w, const int target_h, std::vector<int>& strides, std::vector<GridAndStride>& grid_strides)
{
    for (auto stride : strides)
    {
        int num_grid_w = target_w / stride;
        int num_grid_h = target_h / stride;
        for (int g1 = 0; g1 < num_grid_h; g1++)
        {
            for (int g0 = 0; g0 < num_grid_w; g0++)
            {
                GridAndStride gs;
                gs.grid0 = g0;
                gs.grid1 = g1;
                gs.stride = stride;
                grid_strides.push_back(gs);
            }
        }
    }
}

static void generate_proposals(std::vector<GridAndStride> grid_strides, const ncnn::Mat& feat_blob, float prob_threshold, std::vector<Object>& objects) {
    const int num_grid = feat_blob.h;
    fprintf(stderr, "output height: %d, width: %d, channels: %d, dims:%d\n", feat_blob.h, feat_blob.w, feat_blob.c, feat_blob.dims);

    const int num_anchors = grid_strides.size();

    const int num_class = feat_blob.c / num_anchors - 5;

    const float* feat_ptr = feat_blob.channel(0);

    for (int anchor_idx = 0; anchor_idx < num_anchors; anchor_idx++)
    {
        __android_log_print(ANDROID_LOG_DEBUG, "yolov7-custom", "anchor_idx %d", anchor_idx);

        const int grid0 = grid_strides[anchor_idx].grid0;

        __android_log_print(ANDROID_LOG_DEBUG, "yolov7-custom", "grid0 %d", grid0);
        const int grid1 = grid_strides[anchor_idx].grid1;

        __android_log_print(ANDROID_LOG_DEBUG, "yolov7-custom", "grid1 %d", grid1);
        const int stride = grid_strides[anchor_idx].stride;

        __android_log_print(ANDROID_LOG_DEBUG, "yolov7-custom", "stride %d", stride);

        // yolox/models/yolo_head.py decode logic
        //  outputs[..., :2] = (outputs[..., :2] + grids) * strides
        //  outputs[..., 2:4] = torch.exp(outputs[..., 2:4]) * strides
        float x_center = (feat_ptr[0] + grid0) * stride;
        float y_center = (feat_ptr[1] + grid1) * stride;
        float w = exp(feat_ptr[2]) * stride;
        float h = exp(feat_ptr[3]) * stride;
        float x0 = x_center - w * 0.5f;
        float y0 = y_center - h * 0.5f;

        float box_objectness = feat_ptr[4];
        for (int class_idx = 0; class_idx < num_class; class_idx++)
        {
            float box_cls_score = feat_ptr[5 + class_idx];
            float box_prob = box_objectness * box_cls_score;
            if (box_prob > prob_threshold)
            {
                Object obj;
                obj.rect.x = x0;
                obj.rect.y = y0;
                obj.rect.width = w;
                obj.rect.height = h;
                obj.label = class_idx;
                obj.prob = box_prob;

                objects.push_back(obj);
            }

        } // class loop
        feat_ptr += feat_blob.w;

    } // point anchor loop
}

int Yolo::detect(const cv::Mat &rgb, std::vector<Object> &objects, float prob_threshold,
                 float nms_threshold) {
    int img_w = rgb.cols;
    int img_h = rgb.rows;

    // letterbox pad to multiple of 32
    int w = img_w;
    int h = img_h;
    float scale = 1.f;
    if (w > h)
    {
        scale = (float)target_size / w;
        w = target_size;
        h = h * scale;
    }
    else
    {
        scale = (float)target_size / h;
        h = target_size;
        w = w * scale;
    }

    ncnn::Mat in = ncnn::Mat::from_pixels_resize(rgb.data, ncnn::Mat::PIXEL_RGB, img_w, img_h, w, h);

    // pad to target_size rectangle
    // yolov5/utils/datasets.py letterbox
    int wpad = (w + 31) / 32 * 32 - w;
    int hpad = (h + 31) / 32 * 32 - h;
    ncnn::Mat in_pad;
    ncnn::copy_make_border(in, in_pad, 0, hpad, 0, wpad, ncnn::BORDER_CONSTANT, 114.f);

    // so for 0-255 input image, rgb_mean should multiply 255 and norm should div by std.
    // new release of yolox has deleted this preprocess,if you are using new release please don't use this preprocess.
    in_pad.substract_mean_normalize(0, norm_vals);

    ncnn::Extractor ex = yolo.create_extractor();

    ex.input("images", in_pad);

    std::vector<Object> proposals;

    {
        ncnn::Mat out;
        ex.extract("output0", out);

        std::vector<int> strides = {8, 16, 32}; // might have stride=64
        std::vector<GridAndStride> grid_strides;
        generate_grids_and_stride(in_pad.w, in_pad.h, strides, grid_strides);
        generate_proposals(grid_strides, out, prob_threshold, proposals);
    }

    // sort all proposals by score from highest to lowest
    qsort_descent_inplace(proposals);

    // apply nms with nms_threshold
    std::vector<int> picked;
    nms_sorted_bboxes(proposals, picked, nms_threshold);

    int count = picked.size();

    objects.resize(count);
    for (int i = 0; i < count; i++)
    {
        objects[i] = proposals[picked[i]];

        // adjust offset to original unpadded
        float x0 = (objects[i].rect.x) / scale;
        float y0 = (objects[i].rect.y) / scale;
        float x1 = (objects[i].rect.x + objects[i].rect.width) / scale;
        float y1 = (objects[i].rect.y + objects[i].rect.height) / scale;

        // clip
        x0 = std::max(std::min(x0, (float)(img_w - 1)), 0.f);
        y0 = std::max(std::min(y0, (float)(img_h - 1)), 0.f);
        x1 = std::max(std::min(x1, (float)(img_w - 1)), 0.f);
        y1 = std::max(std::min(y1, (float)(img_h - 1)), 0.f);

        objects[i].rect.x = x0;
        objects[i].rect.y = y0;
        objects[i].rect.width = x1 - x0;
        objects[i].rect.height = y1 - y0;
    }

    return 0;
}
```

The code randomly fails (some iterations go through) in generate_proposals, somewhere around the x_center calculation. I could not work out why. Can someone help?
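One possible cause, offered as a guess from the fragments above: assuming the output is a YOLOX-style num_anchors x (5 + num_class) blob, `num_class` is computed here from the channel count as `feat_blob.c / num_anchors - 5`, whereas the ncnn YOLOX example this decode logic follows derives it from the row width as `feat_blob.w - 5`. If the two values disagree, the class loop reads `feat_ptr[5 + class_idx]` past the end of each row, and such an out-of-bounds read would explain a failure that only happens on some runs. Verifying that `num_class == feat_blob.w - 5` and that `grid_strides.size() == feat_blob.h` would be a reasonable first check.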

RVM model

Hello, how are you?
Thanks for contributing to this project.
I am going to use the ncnn model for RVM (Robust Video Matting).
I tried to download it from the URL you mentioned here, but the model is NOT there.
Could you help me?

MODNet ncnn model

Hi, in the MODNet-GUI project, did you use an ncnn model or a PyTorch model? If it's ncnn, could you share the model?
Following the method provided in the original project, I converted the PyTorch model to ONNX, but onnx2ncnn reports that the Shape operator is unsupported. After trimming the model with onnxsim, the Shape error is still there. That's where I am at the moment.
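For reference, the onnxsim step the question describes typically looks like the minimal sketch below; the file names are placeholders. If the unsupported Shape error survives simplification, the dynamic shapes often have to be fixed to static ones at export time before onnxsim can fold the Shape ops away.

```python
# Minimal sketch: simplify an ONNX model with onnxsim before running onnx2ncnn.
# File names are placeholders.
import onnx
from onnxsim import simplify

model = onnx.load("modnet.onnx")
model_simplified, ok = simplify(model)  # constant-folds subgraphs, often removing Shape ops
assert ok, "simplified model failed validation"
onnx.save(model_simplified, "modnet-sim.onnx")
```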

Asking good questions is also a way of contributing to an open source project. You can ask questions in any language you like.

For example:

  • New model requests:
    • e.g. "please add project XXX" (this helps turn high-quality papers into ncnn models deployable on all platforms in a timely manner)
  • Problem reports:
    • e.g. "reproducing xxx failed; why?"
  • Algorithm questions:
    • e.g. "the xxx algorithm differs from the paper"

Question about the yolov7 model input/output names

Hi, I looked at your yolov7 ncnn inference code; the input/output names there are "in0", "out0", "out1", and "out2". How did you set those? The model I exported with `python export.py` uses "images" and "output", and I don't know what the other two outputs are called.

Is the export.py script perhaps missing some code that sets the input/output names?
