
tfjs-wechat's Introduction

TensorFlow.js Plugin for WeChat Mini Programs

TensorFlow.js is an open-source machine learning project from Google dedicated to bringing hardware-accelerated training and deployment of machine learning models to JavaScript. This TensorFlow.js plugin for WeChat mini programs wraps the TensorFlow.js library so that third-party mini programs can call it. For an example, see the TFJS Mobilenet object-recognition mini program.

Adding the Plugin

Before using the plugin, you must add it under "Settings - Third-party services - Plugin management" in the mini program admin console. Log in to the console, search for the plugin by appid [wx6afed118d9e81df9], and add it. No application is needed; the plugin can be used right after it is added.

Importing the Plugin Package

Before using the plugin, declare it in app.json, for example:

Code example:

{
  ...
  "plugins": {
    "tfjsPlugin": {
      "version": "0.0.6",
      "provider": "wx6afed118d9e81df9"
    }
  }
  ...
}

Importing the TensorFlow.js npm Packages

The latest TensorFlow.js is released as npm packages, so the mini program needs npm or yarn to load them; you can also add them by editing package.json by hand.

TensorFlow.js v2.0 provides a union package, @tensorflow/tfjs, which bundles six sub-packages:

  • tfjs-core: the core package
  • tfjs-converter: GraphModel import and execution
  • tfjs-layers: LayersModel creation, import and execution
  • tfjs-backend-webgl: the WebGL backend
  • tfjs-backend-cpu: the CPU backend
  • tfjs-data: data pipelines

Because mini programs have a 2M app size limit, we recommend loading only the sub-packages you need rather than the union package.

  • If the mini program only needs to import and run GraphModel models, include at least the tfjs-core, tfjs-converter, tfjs-backend-webgl and tfjs-backend-cpu packages; this keeps the imported size as small as possible.
  • If you need to create, import or train LayersModel models, also include the tfjs-layers package.

The example below uses only the tfjs-core, tfjs-converter, tfjs-backend-webgl and tfjs-backend-cpu packages. Code example:

{
  "name": "yourProject",
  "version": "0.0.1",
  "main": "dist/index.js",
  "license": "Apache-2.0",
  "dependencies": {
    "@tensorflow/tfjs-core": "3.5.0",
    "@tensorflow/tfjs-converter": "3.5.0",
    "@tensorflow/tfjs-backend-webgl": "3.5.0"
  }
}

See the mini program npm tooling documentation for how to build npm packages into the mini program.

Note: please download the latest WeChat DevTools from the mini program development Nightly Build changelog, and make sure the version is >= v1.02.1907022.

Polyfilling the fetch Function

To load models with the tf.loadGraphModel or tf.loadLayersModel APIs, the mini program must polyfill the fetch function as follows:

  1. If you use npm, you can load the fetch-wechat npm package:
{
  "name": "yourProject",
  "version": "0.0.1",
  "main": "dist/index.js",
  "license": "Apache-2.0",
  "dependencies": {
    "@tensorflow/tfjs-core": "3.5.0",
    "@tensorflow/tfjs-converter": "3.5.0",
    "@tensorflow/tfjs-backend-webgl": "3.5.0"
    "fetch-wechat": "0.0.3"
  }
}
  2. Alternatively, copy this file directly into your JavaScript source directory: https://cdn.jsdelivr.net/npm/[email protected]/dist/fetch_wechat.min.js

Then call the plugin's configPlugin function in the onLaunch callback of app.js:

var fetchWechat = require('fetch-wechat');
var tf = require('@tensorflow/tfjs-core');
var webgl = require('@tensorflow/tfjs-backend-webgl');
var plugin = requirePlugin('tfjsPlugin');
//app.js
App({
  onLaunch: function () {
    plugin.configPlugin({
      // polyfill fetch function
      fetchFunc: fetchWechat.fetchFunc(),
      // inject tfjs runtime
      tf,
      // inject webgl backend
      webgl,
      // provide webgl canvas
      canvas: wx.createOffscreenCanvas()
    });
  }
});

Caching Models in localStorage

Caching in localStorage saves the bandwidth and time spent downloading the model. WeChat mini programs limit localStorage to 10MB, so this approach only works for models smaller than 10MB. The steps are:

  1. Provide a localStorageHandler function in app.js:
var fetchWechat = require('fetch-wechat');
var tf = require('@tensorflow/tfjs-core');
var plugin = requirePlugin('tfjsPlugin');
//app.js
App({
  // expose localStorage handler
  globalData: {localStorageIO: plugin.localStorageIO},
  ...
});
  2. Add the localStorageHandler logic when loading the model:
const LOCAL_STORAGE_KEY = 'mobilenet_model';
export class MobileNet {
  private model: tfc.GraphModel;
  constructor() { }

  async load() {

    const localStorageHandler = getApp().globalData.localStorageIO(LOCAL_STORAGE_KEY);
    try {
      this.model = await tfc.loadGraphModel(localStorageHandler);
    } catch (e) {
      this.model =
        await tfc.loadGraphModel(MODEL_URL);
      this.model.save(localStorageHandler);
    }
  }
}

Saving Models as User Files

WeChat also supports saving a model as a file. As with localStorage, WeChat mini programs limit local files to 10MB, so this approach only works for models smaller than 10MB. Because the model is ultimately stored as binary, it uses less space than localStorage, which stores it as a base64 string.
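
The base64 overhead mentioned above is easy to estimate: base64 encodes every 3 bytes of binary as 4 ASCII characters. A quick sketch, using hypothetical helpers that are not part of the plugin API, for checking a model against the 10MB cap:

```javascript
// Estimate the localStorage footprint of model weights saved as a base64
// string, compared with the raw binary size used by file storage.
// Hypothetical helpers for illustration; not part of the plugin API.
function base64StorageBytes(binaryBytes) {
  // base64 encodes each 3-byte group as 4 ASCII characters (with padding)
  return Math.ceil(binaryBytes / 3) * 4;
}

const WECHAT_STORAGE_LIMIT = 10 * 1024 * 1024; // 10MB cap

function fitsInLocalStorage(binaryBytes) {
  return base64StorageBytes(binaryBytes) <= WECHAT_STORAGE_LIMIT;
}
```

For example, a 7MB binary model becomes roughly 9.3MB as base64, still under the cap, while an 8MB model would exceed it in localStorage but could still be saved as a binary file.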

The steps are:

  1. Provide a fileStorageHandler function in app.js:
var fetchWechat = require('fetch-wechat');
var tf = require('@tensorflow/tfjs-core');
var plugin = requirePlugin('tfjsPlugin');
//app.js
App({
  // expose fileStorage handler
  globalData: {fileStorageIO: plugin.fileStorageIO},
  ...
});
  2. Add the fileStorageHandler logic when loading the model:
const FILE_STORAGE_PATH = 'mobilenet_model';
export class MobileNet {
  private model: tfc.GraphModel;
  constructor() { }

  async load() {

    const fileStorageHandler = getApp().globalData.fileStorageIO(
        FILE_STORAGE_PATH, wx.getFileSystemManager());
    try {
      this.model = await tfc.loadGraphModel(fileStorageHandler);
    } catch (e) {
      this.model =
        await tfc.loadGraphModel(MODEL_URL);
      this.model.save(fileStorageHandler);
    }
  }
}

Using the WebAssembly Backend

WeChat mini programs support WebAssembly on Android phones. The TensorFlow.js WASM backend is a good fit for low-end and mid-range Android phones: their GPUs tend to be weak relative to their CPUs, and since the WASM backend runs on the CPU, it offers those phones an alternative acceleration path. WASM power consumption is also generally lower. Using the WASM backend requires changes to package.json:

{
  "name": "yourProject",
  "version": "0.0.1",
  "main": "dist/index.js",
  "license": "Apache-2.0",
  "dependencies": {
    "@tensorflow/tfjs-core": "2.0.0",
    "@tensorflow/tfjs-converter": "2.0.0",
    "@tensorflow/tfjs-backend-wasm": "2.0.0",
    ...
  }
}

Then set the wasm backend in app.js. You can host the wasm file yourself to speed up downloading; the wasmUrl in the example below can be replaced with the URL you host.

    const {setWasmPath} = require('@tensorflow/tfjs-backend-wasm');
    const tf = require('@tensorflow/tfjs-core');

    const info = wx.getSystemInfoSync();
    const wasmUrl = 'https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]/wasm-out/tfjs-backend-wasm.wasm';
    const usePlatformFetch = true;
    console.log(info.platform);
    if (info.platform === 'android') {
      setWasmPath(wasmUrl, usePlatformFetch);
      tf.setBackend('wasm').then(() => console.log('set wasm backend'));
    }
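
The platform gate in the snippet above reduces to a plain function. This is a sketch only, with the backend names taken from the text (WASM on Android, the default WebGL backend elsewhere):

```javascript
// Pick a tfjs backend name from the WeChat platform string.
// Sketch: per the section above, WASM is only enabled on Android;
// other platforms keep the default WebGL backend.
function pickBackend(platform) {
  return platform === 'android' ? 'wasm' : 'webgl';
}
```

In app.js this would be used as tf.setBackend(pickBackend(wx.getSystemInfoSync().platform)).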

Note: the WASM backend is currently broken because its bundle is incompatible with the WeChat npm loader; this note will be updated when it is fixed.

Note: in the latest WeChat versions the OffscreenCanvas is invalidated by page navigation, so setting up tfjs in the onLaunch function of app.js causes errors after the mini program exits or navigates to another page. We recommend calling the configPlugin function in the onLoad of the page that uses tfjs. WeChat's December release will fix this problem.

var fetchWechat = require('fetch-wechat');
var tf = require('@tensorflow/tfjs-core');
var plugin = requirePlugin('tfjsPlugin');
//index.js
Page({
  onLoad: function () {
    plugin.configPlugin({
      // polyfill fetch function
      fetchFunc: fetchWechat.fetchFunc(),
      // inject tfjs runtime
      tf,
      // provide webgl canvas
      canvas: wx.createOffscreenCanvas(),
      backendName: 'wechat-webgl-' + Date.now()
    });
    ...
  }
});

Once this setup is complete, you can start using the TensorFlow.js APIs.

Notes on Using the tfjs-models Model Library

The model library provides a set of pre-trained models that make it easy to add ML features to a mini program. The model categories include:

  • Image classification
  • Speech recognition
  • Human pose estimation
  • Object detection
  • Text classification

Because these APIs load their default model files from Google Cloud, using them directly means users in China may not be able to download the models. When using a model API inside the mini program, pass the modelUrl parameter, which can point to the mirror server in China. The Google Cloud base URL is https://storage.googleapis.com; the China mirror's base URL is https://www.gstaticcnapps.cn. The model URL paths are identical on both. For example:

Both URLs share the path /tfjs-models/savedmodel/posenet/mobilenet/float/050/model-stride16.json
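
The base-URL swap described above can be written as a small helper (hypothetical; not part of tfjs-models or the plugin):

```javascript
const GOOGLE_BASE = 'https://storage.googleapis.com';
const MIRROR_BASE = 'https://www.gstaticcnapps.cn';

// Rewrite a Google Cloud model URL to the China mirror.
// The URL path is identical on both hosts, so only the origin changes.
function toMirrorUrl(url) {
  return url.startsWith(GOOGLE_BASE)
      ? MIRROR_BASE + url.slice(GOOGLE_BASE.length)
      : url;
}
```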

Here is an example that loads the posenet model:

import * as posenet from '@tensorflow-models/posenet';

const POSENET_URL =
    'https://www.gstaticcnapps.cn/tfjs-models/savedmodel/posenet/mobilenet/float/050/model-stride16.json';

const model = await posenet.load({
  architecture: 'MobileNetV1',
  outputStride: 16,
  inputResolution: 193,
  multiplier: 0.5,
  modelUrl: POSENET_URL
});

tfjs-examples: the tfjs Example Gallery

Examples of using the tfjs APIs.

Version Requirements

  • WeChat base library version >= 2.7.3
  • WeChat DevTools >= v1.02.1907022
  • tfjs-core >= 1.5.2
  • tfjs-converter >= 1.5.2 if using the localStorage model cache

Note: in WeChat DevTools v1.02.19070300 you need to enable hardware acceleration in the general settings so that TensorFlow.js can use WebGL acceleration.

Release Notes

  • 0.0.2 The plugin no longer re-exports the TensorFlow.js API library; the mini program provides it instead.
  • 0.0.3 Uses an offscreen canvas; the mini program no longer needs to include a plugin component.
  • 0.0.5 The example programs use the tfjs sub-packages to reduce mini program size.
  • 0.0.6 Supports tfjs-core version 1.2.7.
  • 0.0.7 Lets users set the WebGL backend name, which works around the issue of the offscreen canvas being invalidated.
  • 0.0.8 Adds localStorage support; models under 10M can be cached in localStorage.
  • 0.0.9 Adds fileSystem support; models under 10M can be cached in the local file system. Fixed a missing-kernel bug.
  • 0.1.0 Supports tfjs 2.0.x.
  • 0.2.0 Supports tfjs 3.x.

Contributors

cyfeng16, dependabot[bot], embbnux, pyu10055, qszhu, willin


tfjs-wechat's Issues

Why is model prediction so slow on an iOS device?

TensorFlow.js 1.7.0
tfjs-wechat plugin 0.0.9
WeChat 7.0.12
WeChat base API 2.10.4
WeChat IDE v1.02.1911180

I tried to run the blazeface model in a mini program, but I found that prediction was very slow on iOS. Here are my prediction times:
IDE: 15ms, Android (MI 6): 40ms, iOS (iPhone 6s): 460ms

But when I ran the blazeface model on the web, e.g. in Safari, prediction was fast.

tfl.loadLayersModel cannot load weight file from server

TensorFlow.js version: 1.2.6
tfjs-wechat plugin version: 0.0.5
WeChat version: 7.0.5
WeChat base API version: 2.8.0
WeChat IDE version: v1.02.1908012 (Nightly)

When I try to load the model on Android or iOS devices (Huawei P10plus, iPhoneXR, Samsung Note8, MI 9SE), model.json can be loaded while the request for group1-shard1of1.bin is always pending. model.json and group1-shard1of1.bin are in the same directory and both of them are accessible.

Screenshot:
image
image

appServiceSDKScriptError & thirdScriptError

To get help from the community, we encourage using Stack Overflow and the tensorflow.js tag.

  • TensorFlow.js version: 1.2.7
  • tfjs-wechat plugin version: 0.0.6
  • WeChat version: latest
  • WeChat base API version: 2.8.0
  • WeChat IDE version: 1.0.2.1910121

Describe the problem or feature request

Demo cannot run

Code to reproduce the bug / link to feature request

appServiceSDKScriptError
Dt.default.OffscreenCanvas is not a constructor; at wx.createOffscreenCanvas
TypeError: Dt.default.OffscreenCanvas is not a constructor
    at eval (debug:///[publib]:1:666498)
    at Wt (debug:///[publib]:1:667132)
    at Object.Lt (debug:///[publib]:1:665198)
    at Object.p.<computed> (debug:///[publib]:1:1184691)
    at Object.eval (debug:///[publib]:1:481381)
    at Function.eval (debug:///[publib]:1:1183029)
    at Object.eval (debug:///[publib]:1:442685)
    at pe.onLaunch (weapp:///app.js:36:26)
    at pe.eval (debug:///[publib]:1:1460657)
    at eval (debug:///[publib]:1:1461219)
errorReport @ debug:///[publib]:1
debug:///[publib]:1 thirdScriptError
Cannot assign to read only property 'btoa' of object '[object Window]';at App lifeCycleMethod onLaunch function
TypeError: Cannot assign to read only property 'btoa' of object '[object Window]'
    at e ([__wxPluginCode__]:1089:45)
    at Object.exports.setupWechatPlatform ([__wxPluginCode__]:1089:1522)
    at Object.exports.configPlugin ([__wxPluginCode__]:1074:148)
    at pe.onLaunch (weapp:///app.js:34:12)
    at pe.eval (debug:///[publib]:1:1460657)
    at eval (debug:///[publib]:1:1461219)
    at new pe (debug:///[publib]:1:1461295)
    at Function.eval (debug:///[publib]:1:1461640)
    at eval (debug:///[publib]:1:1448957)
    at eval (weapp:///app.js:32:1)

QQ20191106-122327@2x

can't use tfjs-backend-wasm 2.0.0

TensorFlow.js 2.0.0
tfjs-wechat plugin 0.10
WeChat 7.0.13
WeChat base API 2.11.3
WeChat IDE stable 1.03.2006090
An error is thrown when trying to use tfjs-backend-wasm:
image
Clicking through to the error, we found this in @tensorflow/tfjs-backend-wasm/index.js:
image
Written this way, the cast variable ends up undefined, so tfjs-backend-wasm 2.0.0 cannot be used (at least not directly). When will 2.0.1 be supported, or a patched 2.0.0 be published? Alternatively, can we use a 1.x version of tfjs-backend-wasm?

no

Thanks for the tfjs-wechat project; now we can use TensorFlow in WeChat mini programs!
As a newbie to TensorFlow, I tried to use tfjs-wechat in my projects and ran into some difficulties:
1. Why use tfjs-wechat?
I am interested in image classification and want a way to classify some images.
2. How to use tfjs-wechat?
I tried this project's mobilenet demo and found that I need a trained model to use it, so the key point is to train my own models.
3. How to train a model?
I found this is not easy for someone who has not used TensorFlow before. I wrote the training program in Python, and the trained model after conversion was very big, more than 10M.
So how can I train a model smaller than 5M in an easy way? I found a good project for newbies: https://github.com/googlecreativelab/teachablemachine-community. Teachable Machine makes it easy to train and export models, but unfortunately I can't use it in a WeChat program directly; some adaptation is needed.
It's not easy for a newbie to finish this, but I think it would be a good idea to support Teachable Machine in tfjs-wechat; I hope you agree!
Thanks!

Dependencies exceed the 2M size limit of the WeChat mini program

TensorFlow.js version:1.2.2
tfjs-wechat plugin version:0.0.5

Because my model is converted from Keras, I need to use tf.loadLayersModel in a single JavaScript file. The package depends on the following:
"dependencies": {
    "@tensorflow/tfjs-core": "1.2.2",
    "@tensorflow/tfjs-layers": "1.2.2",
    "fetch-wechat": "0.0.3"
}
But when compiling and uploading the preview, it reports that the size exceeds the 2M limit.

appServiceSDKScriptError Fail to compile fragment shader;

On Android devices, running ./demo/mobilenet gives:

appServiceSDKScriptError Fail to compile fragment shader;
at api onCameraFrame callback function
Error: Fail to compile fragment shader.

log

TensorFlow.js version 1.2.7
tfjs-wechat plugin version 0.0.6
WeChat version 7.0.9
WeChat base API version 2.9.4
WeChat IDE version 1.02.1911181

Failed to link vertex and fragment shaders

image
if (this.mobilenetModel) { this.mobilenetModel.classify(frame.data, { width: frame.width, height: frame.height }); }
The mobilenet demo reports the error "Failed to link vertex and fragment shaders".

TypeError: Right-hand side of 'instanceof' is not callable

On Android devices:

TypeError: Right-hand side of 'instanceof' is not callable.
at t.fromPixels(https://usr/app-service.js:...)

TensorFlow.js version 1.2.7
tfjs-wechat plugin version 0.0.6
WeChat version 7.0.7 (but Android WeChat 7.0.6 and iOS devices are OK)
WeChat base API version 2.7.5
WeChat IDE version 2.7.5

How can I fix this problem?

The tfjsPlugin 0.0.8 plugin version does not exist

I wanted to use the localStorage support added in 0.0.8. tfjs-core and tfjs-converter have both been changed to 1.5.2, but the DevTools reports:

"provider:wx6afed118d9e81df9, version:0.0.8, 插件版本不存在, the version of the app/plugin is not exist".

In 0.0.7 and earlier versions, plugin.localStorageIO is undefined. Could you help with this? Thanks.

The code does not run

module "miniprogram_npm/@tensorflow/tfjs-backend-webgl/seedrandom.js" is not defined

tf.enableDebugMode() cause an error

To get help from the community, we encourage using Stack Overflow and the tensorflow.js tag.

TensorFlow.js version: 1.7.2
tfjs-wechat plugin version: 0.0.9
WeChat version: WeChat DevTools
WeChat base API version: 2.10.4
WeChat IDE version: 1.02.2004020
Describe the problem or feature request: calling tf.enableDebugMode makes tfjs access BOM objects, which are not accessible in WeChat. That behaviour causes: Cannot read property 'userAgent' of undefined

Code to reproduce the bug / link to feature request

the code:

    tf.enableDebugMode()

the error

Uncaught (in promise) thirdScriptError
Cannot read property 'userAgent' of undefined
TypeError: Cannot read property 'userAgent' of undefined
    at Object.evaluationFn (http://127.0.0.1:26467/appservice/miniprogram_npm/@tensorflow/tfjs-core/index.js:17:73072)
    at t.evaluateFlag (http://127.0.0.1:26467/appservice/miniprogram_npm/@tensorflow/tfjs-core/index.js:17:3160)
    at t.get (http://127.0.0.1:26467/appservice/miniprogram_npm/@tensorflow/tfjs-core/index.js:17:2468)
    at t.getNumber (http://127.0.0.1:26467/appservice/miniprogram_npm/@tensorflow/tfjs-core/index.js:17:2546)
    at e.startTimer (http://127.0.0.1:26467/appservice/miniprogram_npm/@tensorflow/tfjs-core/index.js:17:294533)
    at e.runWebGLProgram (http://127.0.0.1:26467/appservice/miniprogram_npm/@tensorflow/tfjs-core/index.js:17:334813)
    at e.uploadToGPU (http://127.0.0.1:26467/appservice/miniprogram_npm/@tensorflow/tfjs-core/index.js:17:337786)
    at http://127.0.0.1:26467/appservice/miniprogram_npm/@tensorflow/tfjs-core/index.js:17:333339
    at Array.map (<anonymous>)
    at e.runWebGLProgram (http://127.0.0.1:26467/appservice/miniprogram_npm/@tensorflow/tfjs-core/index.js:17:332630)

What causes the error (I'm using the minified tfjs code):

navigator.userAgent || navigator.vendor || window.opera

Cannot find the type definition file for "jasmine".

To get help from the community, we encourage using Stack Overflow and the tensorflow.js tag.

TensorFlow.js 0.1.0
tfjs-wechat plugin 0.1.0
WeChat version
WeChat base API version
WeChat IDE version 1.03.201220
Describe the problem or feature request
Code to reproduce the bug / link to feature request

As the title says, compiling the downloaded mobilenet example reports this error.
I have already installed the npm packages following the readme.
Please take a look. Thanks!

Posenet initialization error

TensorFlow.js version:1.2.7
tfjs-wechat plugin version:0.0.5
WeChat version:7.0.6
WeChat base API version:2.8.0
WeChat IDE version:v1.02.1908082
Describe the problem or feature request
Posenet initialization error

platform:windows 10

app.json

"tfjsPlugin": {
      "version": "0.0.5",
      "provider": "wx6afed118d9e81df9"
 }

TypeError: this.fetchFunc is not a function

Code to reproduce the bug / link to feature request

Error running a model trained with TensorFlow Object Detection

To get help from the community, we encourage using Stack Overflow and the tensorflow.js tag.

TensorFlow.js version: 1.2.2
tfjs-wechat plugin version: 0.0.5
WeChat version: 7.0.5
WeChat base API version: 2.7.4
WeChat IDE version: v1.02.1907152
Describe the problem or feature request

model url: https://github.com/yichuxue/tf_model/blob/master/tf_cat_model/model.json

The model was trained with TensorFlow Object Detection and then converted from a saved_model. I could not find a matching problem after googling, so I filed this issue.

Code to reproduce the bug / link to feature request
image

failed to compile fragment shader; Error:0:144:'.0': syntax error

To get help from the community, we encourage using Stack Overflow and the tensorflow.js tag.

TensorFlow.js version
tfjs-wechat plugin version : 0.0.7
WeChat version
WeChat base API version :2.10.1
WeChat IDE version
Describe the problem or feature request:all
Code to reproduce the bug / link to feature request

image

image

Cannot load layers model completely in real-device debugging.

tfjs-wechat plugin version: tfjs-backend-cpu": "2.0.1",tfjs-backend-webgl": "2.0.1",tfjs-core": "2.0.1",tfjs-layers": "2.0.1"
WeChat version: IOS-7.0.13
WeChat base API version: 2.11.2
WeChat IDE version v1.02.1911180

The mini program can load the layers model when debugging in the PC WeChat IDE, but cannot load it completely when debugging on a real device.
I found the group-shard bin download stays pending, as follows:
image
And the console log is as follows:
image

I load the layers model as follows:

flowerModel = await tfLayers.loadLayersModel('https://6169-ai-painter-7q1db-1302478925.tcb.qcloud.la/model.json?sign=5908e48b1382d81a4f8fbc164d423593&t=1592962790');

I don't know whether this is a bug in WeChat. Can you help me? Thanks very much!

Same input: devtools works perfectly, but on the device it does not work

To get help from the community, we encourage using Stack Overflow and the tensorflow.js tag.

TensorFlow.js version: 2.3.0
tfjs-wechat plugin version: 0.1.0
WeChat version: 7.0.18
WeChat base API version: 2.10.1
WeChat IDE version: 1.03.2006090
Describe the problem or feature request
Same input: in devtools it works perfectly, but on an Android device the output is incorrect.
For example, in devtools the output is
[[[23.666297912597656,921.23193359375,183.60040283203125,927.2606201171875],[15.292533874511719,1537.2786865234375,183.01553344726562,1543.4986572265625],[0.27048492431640625,305.4588928222656,183.98831176757812,312.1147766113281],[1.2694244384765625,1384.604248046875,186.19090270996094,1390.690185546875],[5.957794189453125,1064.868408203125,182.96878051757812,1071.165771484375],[11.017524719238281,224.19439697265625,185.01129150390625,230.95016479492188],[7.940696716308594,953.9198608398438,184.24075317382812,960.0391235351562],[6.575469970703125,688.0530395507812,184.0544891357422,694.9642944335938],[12.370758056640625,138.0717010498047,185.07872009277344,143.78099060058594],[11.984046936035156,832.9068603515625,181.735107421875,838.4112548828125],[0,1473.8946533203125,184.7564697265625,1480.3912353515625],[5.432441711425781,1233.8905029296875,183.02520751953125,1240.6072998046875],[8.294059753417969,795.270263671875,183.12899780273438,801.340087890625],[4.284919738769531,986.360107421875,175.13046264648438,992.528564453125],[16.285057067871094,599.8233642578125,182.50344848632812,606.190673828125],[4.934089660644531,1154.9400634765625,184.24420166015625,1161.1971435546875],[2.3918609619140625,406.9335021972656,181.7635955810547,413.1664123535156],[0.7287826538085938,1411.01318359375,182.46786499023438,1417.558349609375],[8.268196105957031,1626.1695556640625,183.573486328125,1631.8653564453125],[5.026954650878906,1737.0159912109375,186.707763671875,1745.3245849609375],[11.749588012695312,1727.8089599609375,184.548583984375,1736.1741943359375],[39.124656677246094,1713.540771484375,181.75552368164062,1718.755615234375],[20.725425720214844,10.743600845336914,204.57266235351562,16.622875213623047],[0,1310.3511962890625,185.74510192871094,1316.6363525390625],[7.2787017822265625,507.96319580078125,183.15560913085938,513.7721557617188],[7.469696044921875,1658.9466552734375,183.41920471191406,1671.5430908203125]]]

but on the Android device the output is
[[[51.03124237060547,950.5,126.74998474121094,954.9999389648438],[51.03124237060547,950.5,126.74998474121094,954.9999389648438],[51.03124237060547,950.5,126.74998474121094,954.9999389648438],[51.03124237060547,950.5,126.74998474121094,954.9999389648438],[51.03124237060547,950.5,126.74998474121094,954.9999389648438],[51.03124237060547,950.5,126.74998474121094,954.9999389648438],[51.03124237060547,950.5,126.74998474121094,954.9999389648438]]]

Code to reproduce the bug / link to feature request
var result = await model.executeAsync({'image_arrays:0':batched},['detections:0']); result = tf.slice(result,[0,0,1],[1,-1,4]);

Predictions are correct in the WeChat DevTools but wrong on real phones

TensorFlow.js version 1.2.2
tfjs-wechat plugin version 0.0.5
WeChat version 7.0.5
WeChat base API version 2.7.4
WeChat IDE version 1.02.1907052
Describe the problem or feature request
Using tfjs to run a segmentation model in the mini program, I get correct segmentation results in the desktop DevTools, but inference results are wrong on phones (both iOS and Android). I tried both model.predict and model.execute, and the problem occurs with both. Also, wx.createOffscreenCanvas() does not support real-device debugging.
Code to reproduce the bug / link to feature request

How to download the model completely?

Hello, I want to download the model from the mirror and load it locally.
Here is what I have:

  1. https://www.gstaticcnapps.cn/tfjs-models/savedmodel/posenet/mobilenet/float/050/model-stride16.json

  2. https://www.gstaticcnapps.cn/tfjs-models/savedmodel/posenet/mobilenet/float/050/group1-shard1of1.bin

Then I put them in the same folder on my local HTTP server, but when I load the model it still reports an error:
the length of the Float32Array must be a multiple of 4

So what should I do? Did I miss some other files?

Thanks

WebGL rendering sometimes gets stuck

TensorFlow.js version:2.0.1
tfjs-wechat plugin version:0.10
WeChat version:7.0.13
WeChat base API version:2.11.3
WeChat IDE version:stable 1.03.2006090

Our team built an AI for 3D pose estimation of shoes and got it running at a good frame rate in an iOS mini program. We then tried to add a 3D rendering effect, but found that rendering sometimes gets stuck after the AI code runs: both the rendering and the AI code keep executing normally, yet the WebGL render area turns completely black. We then ran many tests combining tfjs with WebGL rendering and found that even on an iPhone 8 test device, running just simple tensor operations together with WebGL rendering of camera frames can stall the WebGL rendering; as the complexity on both sides grows, the stall also appears on better devices.
On every device the problem throws no error, does not reproduce reliably, and leaves the console output looking normal, which makes debugging very frustrating because we cannot guess the cause.
We also suspected WebGL running out of memory, but when the stall does not occur the AI and the rendering can keep running, and the problem also appears under very light workloads, so our current guess is that the WebGL memory/state conflicts when both sides use it.

Problem caching a model with localStorageIO

Calling the model.save(localStorageHandler) function with version 0.0.8 throws an error:

Error info Error: APP-SERVICE-SDK:setStorageSync:fail exceed storage max size 10Mb

Measuring the model with sizeof gives 1.4M, and nothing else in the program uses setStorageSync to consume storage space. What could be the reason? Thanks.

tf.browser.fromPixels(data) converts to an empty array

To get help from the community, we encourage using Stack Overflow and the tensorflow.js tag.

TensorFlow.js version:1.2.2
tfjs-wechat plugin version:0.0.5
WeChat version:7.0.5
WeChat base API version:2.7.7
WeChat IDE version:1.02.1907242
Describe the problem or feature request

Print on the phone.
F562D60A4AA591E3D11E8DE2C33AF964

Canvas data cannot be printed on the phone, but it is captured in the WeChat IDE.
image

However, I can get the data correctly in the Chrome browser.

Code to reproduce the bug / link to feature request

Error: Cannot get WebGL rendering context, WebGL is disabled.

TensorFlow.js version 1.2.6
tfjs-wechat plugin version 0.0.6
WeChat version 7.0.6
WeChat base API version 2.8.0
WeChat IDE version v1.02.1907300
platform windows10

Error:Cannot get WebGL rendering context,WebGL is disabled.

The mini program runs normally, but it logs these error messages:

VM1623:1 Initialization of backend wechat-webgl failed
console.warn @ VM1623:1
t.initializeBackend @ index.js:17
(anonymous) @ index.js:17
(anonymous) @ index.js:17
(anonymous) @ index.js:17
(anonymous) @ index.js:17
r @ index.js:17
t.setBackend @ index.js:17
t.setBackend @ index.js:17
t @ VM1680 appservice.js:1089
exports.setupWechatPlatform @ VM1680 appservice.js:1089
exports.configPlugin @ VM1680 appservice.js:1074
onLaunch @ app.js? [sm]:8
(anonymous) @ WAService.js:1
(anonymous) @ WAService.js:1
pe @ WAService.js:1
(anonymous) @ WAService.js:1
(anonymous) @ WAService.js:1
(anonymous) @ app.js? [sm]:6
require @ WAService.js:1
(anonymous) @ VM1845:1
scriptLoaded @ appservice?t=1566032884110:7176
script.onload @ appservice?t=1566032884110:7219
load (async)
loadBabelModule @ appservice?t=1566032884110:7213
window.loadBabelMod @ appservice?t=1566032884110:7227
(anonymous) @ slicedToArray.js:12
VM1623:1 Error: Cannot get WebGL rendering context, WebGL is disabled.
at index.js:17
at Bt (index.js:17)
at new e (index.js:17)
at Object.factory (VM1680 appservice.js:1089)
at t.initializeBackend (index.js:17)
at t. (index.js:17)
at index.js:17
at Object.next (index.js:17)
at index.js:17
at new Promise ()

Save models to local storage and load them locally

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Loading models from the internet can be slow sometimes.

Describe the solution you'd like
A clear and concise description of what you want to happen.

It would be great if users could save tf models to local storage and load them from there the next time they open the app.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

/

Additional context
Add any other context or screenshots about the feature request here.

There's an attempt here: https://github.com/mogoweb/wechat-tfjs-examples

Estimate function keeps running after the page is closed

The problem is :

  1. I initialize the plugin in App.js and load the facemesh models.
  2. On the first page I created a button named "start"; clicking it goes to the second page.
  3. The second page starts to capture frames from the camera and runs facemesh estimation in the canvas.requestAnimationFrame callback function.
  4. When I click the "back" button in the navigation bar, it goes back to the first page, while the requestAnimationFrame callback of the second page is still running and the second page's onUnload function is not called. Clicking "start" again does nothing; after a while, the second page's onUnload is triggered and the app goes to the second page.

In this case I want to stop face detection when I go back to the first page, so I added some code in the onUnload function, but it seems detection is still running in the background and the page events (including UI events) are blocked until all the detect calls finish.

In the editor it works well, but on my Android phone it does not.

I am not sure whether this is a problem with the plugin system or with WeChat itself?

Could not use version 0.1.0

To get help from the community, we encourage using Stack Overflow and the tensorflow.js tag.

TensorFlow.js version: 2.01
tfjs-wechat plugin version 0.1.0
WeChat base API version 2.11.2
WeChat IDE version 1.0.3

This new version does not seem to be published on WeChat; we cannot include this plugin. It shows this error message:

provider:wx6afed118d9e81df9, version:0.1.0, 插件版本不存在,the version of the app/plugin is not exist
Error: provider:wx6afed118d9e81df9, version:0.1.0, 插件版本不存在,the version of the app/plugin is not exist
at D:\Program Files (x86)\Tencent\微信web开发者工具\code\package.nw\core.wxvpkg\d670f7f30a19b01584db216c5c3f5a75.js:1:1327
at processTicksAndRejections (internal/process/task_queues.js:85:5)

The images are not compatible with transfer learning models.

To get help from the community, we encourage using Stack Overflow and the tensorflow.js tag.

Hi there!
I have trained a transfer-learning mobilenet model with Keras and converted it to model.json with some bin files. When I try to load the model with tfl.loadLayersModel(), it throws this console output:
TIM图片20200528200336
It seems the images the camera collected are sent directly to the dense layer (they haven't gone through the Conv layers at the top). Could you tell me how to fix it? Thank you very much!

TensorFlow.js version
tfjs-wechat plugin version 0.0.9
WeChat version
WeChat base API version
WeChat IDE version 2.10.4
Describe the problem or feature request
Code to reproduce the bug / link to feature request

Can you add an example of the ssd_mobilenet model?

Is your feature request related to a problem? Please describe.
Can you add an example of the ssd_mobilenet model?

Describe the solution you'd like
Load the ssd_mobilenet model. Make predictions, get scores and bboxes and tags.

Describe alternatives you've considered
no

Additional context
no

How to use tf.loadGraphModel in a weapp?

I ran the tfjs-wechat example mobileNet.
The debug console shows "TypeError: getApp(...).globalData.localStorageIO is not a function".
How do I load a local model in a weapp?

TensorFlow.js version 1.5.2
tfjs-wechat plugin version 0.0.7
WeChat version 7.0.10
WeChat base API version 2.10.1
WeChat IDE version

On some Android devices, SSD model prediction fails.

TensorFlow.js version: 1.2.6
tfjs-wechat plugin version: 0.0.5
WeChat version: 7.0.5
WeChat base API version: 2.7.7
WeChat IDE version: 1.02.1907242
Describe the problem or feature request

When I tested the SSD model, I found that some Android phones could not predict successfully.
Here are the devices and results I tested:

iPhone X (iOS 12.3) Result: Success
iPhone 8P (iOS 12.3) Result: Success
iPhone 7P (iOS 12.2) Result: Success
iPhone 7P (iOS 12.3.1) Result: Success
Huawei Honor Magic (Android 9) Result: Success
Xiaomi 9 (MIUI 10 && Android 9) Result: Error
Xiaomi K20 (MIUI 10 && Android 9) Result: Error
Smartisan Nut Pro (Android 7.1.1) Result: Error
vivo NEX (Android) Result: Error
Xiaomi 5X (Android 8.1) Result: Error

Error screenshot:
Screenshot_2019-07-27-12-07-42-943_com tencent mm

Code to reproduce the bug / link to feature request

executeAsync memory leak

My debugging procedure is as follows:

  1. In the Page's onLoad function, call ObjectDetection.prototype.load from @tensorflow-models/coco-ssd/index.js to load the model, logging console.log(tf.memory()) before loading.
  2. In the onUnload function, dispose the model, then log console.log(tf.memory()).
  3. Everything other than model loading and disposal is commented out.

I repeatedly opened and closed the page and got the following results:

  1. onLoad
    {unreliable: false, numBytesInGPU: 0, numTensors: 0, numDataBuffers: 0, numBytes: 0}
  2. onUnload
    {unreliable: false, numBytesInGPU: 1080000, numTensors: 1, numDataBuffers: 1, numBytes: 1080000}
  3. onLoad
    {unreliable: false, numBytesInGPU: 0, numTensors: 1, numDataBuffers: 1, numBytes: 1080000}
  4. onUnload
    {unreliable: false, numBytesInGPU: 1080000, numTensors: 2, numDataBuffers: 2, numBytes: 2160000}
  5. onLoad
    {unreliable: false, numBytesInGPU: 0, numTensors: 2, numDataBuffers: 2, numBytes: 2160000}
  6. onUnload
    {unreliable: false, numBytesInGPU: 1080000, numTensors: 3, numDataBuffers: 3, numBytes: 3240000}
  7. onLoad
    {unreliable: false, numBytesInGPU: 0, numTensors: 3, numDataBuffers: 3, numBytes: 3240000}
  8. onUnload
    {unreliable: false, numBytesInGPU: 1080000, numTensors: 4, numDataBuffers: 4, numBytes: 4320000}

As you can see, every time the model is released, another 1,080,000 bytes are left undisposed. I saw someone in the tfjs project issues mention that executeAsync causes a memory leak, and my tests point to the same problem here. Could you please take a look? Thanks.
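For reference, the ownership rule such a page must follow: tensors returned by executeAsync belong to the caller and must be disposed explicitly, in addition to disposing the model in onUnload. Below is a minimal sketch of that lifecycle with the TensorFlow.js calls mocked by a simple counter so it runs standalone; loadModel, pageVisit, and the counter are illustrative names, not plugin or tfjs APIs.

```javascript
// Sketch of the page lifecycle with correct tensor ownership.
let numTensors = 0; // stand-in for tf.memory().numTensors
const makeTensor = () => {
  numTensors++;
  return { dispose() { numTensors--; } };
};

// Simulated model: executeAsync returns output tensors the CALLER owns.
function loadModel() {
  const weights = [makeTensor(), makeTensor()];
  return {
    async executeAsync() { return makeTensor(); },
    dispose() { weights.forEach((t) => t.dispose()); },
  };
}

// One open/close cycle of the page.
async function pageVisit() {
  const model = loadModel();              // onLoad
  const out = await model.executeAsync(); // predict
  out.dispose();                          // dispose EVERY output tensor
  model.dispose();                        // onUnload
}
```

If bytes still accumulate even with every output tensor disposed, the residue may indeed be the executeAsync leak tracked in the tfjs repository rather than an error in the page code.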

Only works when debug mode is turned on (Android)

It worked in the dev tool.
But when I run it on my phone, it only works when I turn on the debug window.

[screenshot]

Everything works in the developer tool, but on the phone it fails to run. Oddly, it works on the phone when "development debugging" is turned on, and fails as soon as it is turned off. All I do is run the initialization and load the blazeface model.

Mobilenet demo: how to get a score?

TensorFlow.js version:1.2.6
tfjs-wechat plugin version:0.05
WeChat version:7.0.5
WeChat base API version:2.7.7
WeChat IDE version:v1.02.1908012
Describe the problem or feature request

After running the mobilenet demo, I found its speed was around 80ms.

I used the demo in tfjs-example on Chrome, and the prediction time is around 80ms, which is in line with my expectation.

However, running the tfjs-example demo in WeChat, the speed was actually about 800ms: the first prediction took 800ms, and subsequent ones about 200ms.

The tfjs-example demo does not reach the speed I need. Is there a better way to improve the model's prediction speed?

This is the result of tfjs-example on WeChat:
[screenshot]

This is the result on Chrome:
[screenshot]
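The 800ms-then-200ms pattern is typical of one-time WebGL shader compilation on first inference, so warming the model up once with a dummy input (e.g. tf.zeros of the input shape) right after loading usually removes the first-call spike. The sketch below mocks the shader cache with a Map so it runs standalone; the op names and timings are illustrative assumptions, not the actual backend internals.

```javascript
// Mocked WebGL shader cache: first use of an op compiles (slow),
// later uses hit the cache (fast).
const shaderCache = new Map();
let compiles = 0; // how many slow compilations happened

function runOp(name) {
  if (!shaderCache.has(name)) {
    compiles++; // one-time compilation cost (hundreds of ms on-device)
    shaderCache.set(name, `compiled:${name}`);
  }
  return shaderCache.get(name);
}

// A "prediction" touches the same ops every time.
function predict() {
  ['conv2d', 'depthwiseConv2d', 'softmax'].forEach(runOp);
}

predict(); // warm-up call right after loading: pays all compilation
predict(); // real prediction: fully cached, no new compiles
```

In the real page this corresponds to one model.predict (or executeAsync) with tf.zeros([1, 224, 224, 3]) after loading, disposing the dummy input and output immediately; only timings measured after that warm-up reflect steady-state speed.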

Code to reproduce the bug / link to feature request

Missing-file errors after build

TensorFlow.js version 0.1.0
tfjs-wechat plugin version: 0.03
WeChat version
WeChat base API version: 2.15.0
WeChat IDE version:1.05.2102010(R)

I'm using the code from #84. After building, I get errors; the offending code is:
miniprogram_npm\@tensorflow\tfjs-backend-wasm\index.js fails on these imports:

require('path'), require('fs'), require('worker_threads'), require('perf_hooks')

I did not change any code and followed the README; in the second step I installed npm add @types/jasmine -D.
PS: The build reported success, but four files (``) showed "npm entry file not found", though that seems unrelated to the above...
Is there a reference layout for the built directory structure? Or what might have gone wrong?

dispose function for the plugin needed.

There is a function, configPlugin, to initialize this plugin, but when the page is released or the app is closed, is there a function to dispose all the resources used?
If I call the config function in the onLoad method, then navigate to another page, close this page, and open it again, onLoad is called once more. Is the plugin configured again? Is a new offscreen canvas created? How do I dispose of the previous one?
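The plugin does not document a dispose counterpart to configPlugin, so a common workaround (an assumption on my part, not a documented API) is to configure once per app session, e.g. from App.onLaunch or behind a guard flag, so re-entering the page never creates a second offscreen canvas. A standalone sketch with the plugin mocked:

```javascript
// Guarded one-time configuration. The plugin object is mocked so this
// runs standalone; in the app, `plugin` would come from
// requirePlugin('tfjsPlugin') and configPlugin would be its real setup.
let canvasesCreated = 0;
const plugin = {
  configPlugin() { canvasesCreated++; }, // pretend this creates a canvas
};

let configured = false;
function ensureConfigured() {
  if (configured) return; // skip on repeated onLoad calls
  plugin.configPlugin(/* fetch polyfill, canvas, backend options */);
  configured = true;
}

// onLoad may run every time the page is (re)opened:
ensureConfigured();
ensureConfigured();
ensureConfigured();
```

Calling ensureConfigured from every page's onLoad is then safe: only the first call does any work, so there is never a stale canvas to dispose.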
