tfgo's Introduction

tfgo: TensorFlow in Go

TensorFlow's Go bindings are hard to use: tfgo makes it easy!

No more problems like:

  • Scoping: each new node will have a new and unique name
  • Typing: attributes are automatically converted to a supported type instead of throwing errors at runtime

It also uses method chaining, making it possible to write pleasant Go code.

Dependencies

  1. The TensorFlow C library, version 2.9.1 (see the TensorFlow installation section below).
  2. The TensorFlow Go bindings github.com/galeone/tensorflow. To work correctly with TensorFlow 2.9.1 in Go, we have to use a fork I created with some fixes for the Go bindings. The bindings can be too large for the Go module proxy, so you may want to switch off proxy usage by executing go env -w GONOSUMDB="github.com/galeone/tensorflow" to pull the code directly using the system-installed git. This changes nothing in the user interface: you can use Go modules as usual.

Installation

go get github.com/galeone/tfgo

Getting started

The core data structure of TensorFlow's Go bindings is the op.Scope struct. tfgo allows creating new *op.Scope values that solve the scoping issue mentioned above.

Since we're defining a graph, let's start from its root (empty graph)

root := tg.NewRoot()

We can now place nodes into this graph and connect them. Let's say we want to multiply a matrix by a column vector and then add another column vector to the result.

Here's the complete source code.

package main

import (
        "fmt"
        tg "github.com/galeone/tfgo"
        tf "github.com/galeone/tensorflow/tensorflow/go"
)

func main() {
        root := tg.NewRoot()
        A := tg.NewTensor(root, tg.Const(root, [2][2]int32{{1, 2}, {-1, -2}}))
        x := tg.NewTensor(root, tg.Const(root, [2][1]int64{{10}, {100}}))
        b := tg.NewTensor(root, tg.Const(root, [2][1]int32{{-10}, {10}}))
        Y := A.MatMul(x.Output).Add(b.Output)
        // Please note that Y is just a pointer to A!

        // If we want to create a different node in the graph, we have to clone Y
        // or equivalently A
        Z := A.Clone()
        results := tg.Exec(root, []tf.Output{Y.Output, Z.Output}, nil, &tf.SessionOptions{})
        fmt.Println("Y: ", results[0].Value(), "Z: ", results[1].Value())
        fmt.Println("Y == A", Y == A) // ==> true
        fmt.Println("Z == A", Z == A) // ==> false
}

that produces

Y:  [[200] [-200]] Z:  [[200] [-200]]
Y == A true
Z == A false

The list of available methods is on GoDoc: http://godoc.org/github.com/galeone/tfgo

Computer Vision using data flow graph

TensorFlow is rich in methods for performing operations on images. tfgo provides the image package, which allows using the Go bindings to perform computer vision tasks in an elegant way.

For instance, it's possible to read an image, compute its directional derivatives along the horizontal and vertical directions, compute the gradient magnitude, and save it.

The code below does that, showing the different results achieved using correlation and convolution operations.

package main

import (
        tg "github.com/galeone/tfgo"
        "github.com/galeone/tfgo/image"
        "github.com/galeone/tfgo/image/filter"
        "github.com/galeone/tfgo/image/padding"
        tf "github.com/galeone/tensorflow/tensorflow/go"
        "os"
)

func main() {
        root := tg.NewRoot()
        grayImg := image.Read(root, "/home/pgaleone/airplane.png", 1)
        grayImg = grayImg.Scale(0, 255)

        // Edge detection using sobel filter: convolution
        Gx := grayImg.Clone().Convolve(filter.SobelX(root), image.Stride{X: 1, Y: 1}, padding.SAME)
        Gy := grayImg.Clone().Convolve(filter.SobelY(root), image.Stride{X: 1, Y: 1}, padding.SAME)
        convoluteEdges := image.NewImage(root.SubScope("edge"), Gx.Square().Add(Gy.Square().Value()).Sqrt().Value()).EncodeJPEG()

        Gx = grayImg.Clone().Correlate(filter.SobelX(root), image.Stride{X: 1, Y: 1}, padding.SAME)
        Gy = grayImg.Clone().Correlate(filter.SobelY(root), image.Stride{X: 1, Y: 1}, padding.SAME)
        correlateEdges := image.NewImage(root.SubScope("edge"), Gx.Square().Add(Gy.Square().Value()).Sqrt().Value()).EncodeJPEG()

        results := tg.Exec(root, []tf.Output{convoluteEdges, correlateEdges}, nil, &tf.SessionOptions{})

        file, _ := os.Create("convolved.png")
        file.WriteString(results[0].Value().(string))
        file.Close()

        file, _ = os.Create("correlated.png")
        file.WriteString(results[1].Value().(string))
        file.Close()
}

airplane.png (input image)

convolved.png (gradient magnitude via convolution)

correlated.png (gradient magnitude via correlation)

The list of available methods is on GoDoc: http://godoc.org/github.com/galeone/tfgo/image

Train in Python, Serve in Go

TensorFlow 2 comes with many easy ways to export a computational graph (e.g. a Keras model, or a function decorated with @tf.function) to the SavedModel serialization format (the only officially supported one).

Using TensorFlow 2 (with Keras or tf.function) together with tfgo, exporting a trained model (or a generic computational graph) and using it in Go is straightforward.

Just dig into the example to understand how to serve a trained model with tfgo.

Python code

import tensorflow as tf

model = tf.keras.Sequential(
    [
        tf.keras.layers.Conv2D(
            8,
            (3, 3),
            strides=(2, 2),
            padding="valid",
            input_shape=(28, 28, 1),
            activation=tf.nn.relu,
            name="inputs",
        ),  # 14x14x8
        tf.keras.layers.Conv2D(
            16, (3, 3), strides=(2, 2), padding="valid", activation=tf.nn.relu
        ),  # 7x7x16
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, name="logits"),  # linear
    ]
)

tf.saved_model.save(model, "output/keras")

Go code

package main

import (
        "fmt"
        tg "github.com/galeone/tfgo"
        tf "github.com/galeone/tensorflow/tensorflow/go"
)

func main() {
        // A model exported with tf.saved_model.save()
        // automatically comes with the "serve" tag because the SavedModel
        // file format is designed for serving.
        // This tag contains the various exported functions. Among these, the
        // "serving_default" signature_def is always present. This signature def
        // works exactly like a TF 1.x graph: get the input and the output tensors,
        // and use them as the placeholders to feed and the outputs to fetch, respectively.

        // To get info inside a SavedModel the best tool is saved_model_cli
        // that comes with the TensorFlow Python package.

        // e.g. saved_model_cli show --all --dir output/keras
        // gives, among other things, this info:

        // signature_def['serving_default']:
        // The given SavedModel SignatureDef contains the following input(s):
        //   inputs['inputs_input'] tensor_info:
        //       dtype: DT_FLOAT
        //       shape: (-1, 28, 28, 1)
        //       name: serving_default_inputs_input:0
        // The given SavedModel SignatureDef contains the following output(s):
        //   outputs['logits'] tensor_info:
        //       dtype: DT_FLOAT
        //       shape: (-1, 10)
        //       name: StatefulPartitionedCall:0
        // Method name is: tensorflow/serving/predict

        model := tg.LoadModel("test_models/output/keras", []string{"serve"}, nil)

        fakeInput, _ := tf.NewTensor([1][28][28][1]float32{})
        results := model.Exec([]tf.Output{
                model.Op("StatefulPartitionedCall", 0),
        }, map[tf.Output]*tf.Tensor{
                model.Op("serving_default_inputs_input", 0): fakeInput,
        })

        predictions := results[0]
        fmt.Println(predictions.Value())
}

Why?

Thinking about computation in terms of data flow graphs, and describing programs this way, is, in one word, challenging.

Also, tfgo brings GPU computations to Go and allows writing parallel code without worrying about the device that executes it (just place the graph on the device you desire: that's it!).
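
For example, a minimal sketch of device pinning, assuming the op.Scope returned by tg.NewRoot() exposes the bindings' WithDevice method (verify against your bindings version):

package main

import (
        "fmt"

        tf "github.com/galeone/tensorflow/tensorflow/go"
        tg "github.com/galeone/tfgo"
)

func main() {
        root := tg.NewRoot()
        // Assumption: WithDevice pins every node defined under this scope
        // to the first GPU; use "/device:CPU:0" for the CPU.
        gpu := root.SubScope("gpu").WithDevice("/device:GPU:0")
        A := tg.NewTensor(gpu, tg.Const(gpu, [2][2]int32{{1, 2}, {-1, -2}}))
        x := tg.NewTensor(gpu, tg.Const(gpu, [2][1]int32{{10}, {100}}))
        Y := A.MatMul(x.Output)
        results := tg.Exec(root, []tf.Output{Y.Output}, nil, &tf.SessionOptions{})
        fmt.Println(results[0].Value())
}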

Contribute

I love contributions. Seriously. Having people who share your interests and want to face the same challenges is awesome.

If you'd like to contribute, just dig into the code and see what can be added or improved. Start a discussion by opening an issue and let's talk about it.

Just follow the same design I use in the image package ("override" the same Tensor methods, document the methods, test your changes, ...), as in the sketch below.
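
As an illustration, here is a sketch of that pattern for a hypothetical new package (the package name audio and the Signal type are invented for the example; the only assumption about tfgo is that tg.Tensor's Clone returns *tg.Tensor, as the image package relies on):

// Hypothetical audio package following the image package's design.
package audio

import (
        tg "github.com/galeone/tfgo"
)

// Signal wraps tg.Tensor so chainable methods can stay typed as *Signal.
type Signal struct {
        tg.Tensor
}

// Clone "overrides" tg.Tensor.Clone by returning *Signal instead of
// *tg.Tensor, keeping method chains inside the audio package.
func (s *Signal) Clone() *Signal {
        return &Signal{*s.Tensor.Clone()}
}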

There are a lot of packages that can be added, like the image package. Feel free to work on a brand new package: I'd love to see this kind of contribution!

TensorFlow installation

Manual

On macOS you can brew install libtensorflow (assuming you have brew installed; brew is a package manager, and if you need help installing it, follow the instructions at https://docs.brew.sh/Installation).

Download and install the C library from https://www.tensorflow.org/install/lang_c

curl -L "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-linux-x86_64-2.9.1.tar.gz" | sudo tar -C /usr/local -xz
sudo ldconfig

Docker

docker pull tensorflow/tensorflow:2.9.1

Or you can use your system package manager.

tfgo's People

Contributors

alimoeeny, almendar, andrewda, deng-xian-sheng, dependabot[bot], galeone, lidalei, luigimarano, mauri870, miku, ninedraft, rots, testwill

tfgo's Issues

OCR example

Can you use this package as OCR, is there a simple example that can extract text from image?

How to use the model trained by tf.estimator

For example, when I used tf.estimator training model, input_fn was used in training and feature_column was specified, so there was no name of input node and output node, could you help me?

est = tf.estimator.LinearClassifier(feature_columns=one_hot_feature_columns + crossed_columns,
                                            n_classes=2,
                                            model_dir=model_path,
                                            optimizer=tf.train.FtrlOptimizer(
                                                learning_rate=0.01,
                                                l2_regularization_strength=0.02)
                                            )

You must feed a value for placeholder tensor 'input' with dtype string and shape [?]

My code to create a tensor is:
model := tg.LoadModel("export_golang", []string{"serves_wdl"}, nil)
tensorExamStr := [1]string{"34,Private,198693,10th,6,Never-married,Other-service,Not-in-family,White,Male,0,0,30,United-States"}
fakeInput, _ := tf.NewTensor(tensorExamStr)

I don't understand why appears this panic since my input's dtype is string and shape is [1]

tf.Variable support

This is related to #23

Why does the API not support tf.Variable? What would it take to add it?

Support for Keras/TF_2.0?

Hello,

I am working in Python and using TensorFlow's Keras API ( e.g. tf.keras.models.Sequential). Would I be able to export it (export_savedmodel) and then load it into Go with TFGO? Or do I need to stick to "pure" TensorFlow expressions?

Also, if I use TensorFlow 2.0, will it work correctly with TFGO?

Thanks!

panic: interface conversion: interface {} is [][][]float32, not [][][]float32

After inference I cannot convert the interface to a slice.

var results []*tf.Tensor
results = model.Exec([]tf.Output{
    model.Op("softmax/truediv", 0),
}, map[tf.Output]*tf.Tensor{
    model.Op("the_input", 0): input,
})

fmt.Printf("results type %T \n", results)
fmt.Printf("results[0] type %T \n", results[0])
fmt.Printf("results[0].Shape() %v \n", results[0].Shape())

probabilities := results[0].Value().([][][]float32)

I get

results type []*tensorflow.Tensor 
results[0] type *tensorflow.Tensor 
results[0].Shape() [1 25 38] 
panic: interface conversion: interface {} is [][][]float32, not [][][]float32
<...>

Is this a bug, or did I miss something?

LeNetDropout/softmax_linear/Identity

results := model.Exec([]tf.Output{
model.Op("LeNetDropout/softmax_linear/Identity", 0),

In this line, if we are using our own .pb model, how do we find out the identifier for it? (A sketch for listing a graph's operation names follows.)
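
Besides saved_model_cli (shown in the README above), one way to discover node names from Go is to walk the loaded graph. A minimal sketch using the raw bindings (the model path and tag are placeholders for your own):

package main

import (
        "fmt"

        tf "github.com/galeone/tensorflow/tensorflow/go"
)

func main() {
        model, err := tf.LoadSavedModel("path/to/your/model", []string{"serve"}, nil)
        if err != nil {
                panic(err)
        }
        defer model.Session.Close()

        // Print every operation name in the graph: the input placeholders
        // and the output identifiers to pass to model.Op appear in this list.
        for _, operation := range model.Graph.Operations() {
                fmt.Println(operation.Name())
        }
}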

installation issue: can't find for_core_protos_go_proto

go get github.com/galeone/tfgo
go: finding module for package github.com/tensorflow/tensorflow/tensorflow/go/core/protobuf/for_core_protos_go_proto
go: finding module for package github.com/tensorflow/tensorflow/tensorflow/go/core/protobuf/for_core_protos_go_proto
../../../../.gvm/pkgsets/go1.15/global/pkg/mod/github.com/tensorflow/[email protected]+incompatible/tensorflow/go/saved_model.go:25:2: module github.com/tensorflow/tensorflow@latest found (v2.4.1+incompatible), but does not contain package github.com/tensorflow/tensorflow/tensorflow/go/core/protobuf/for_core_protos_go_proto

Embedding a Python interpreter - Tensorflow Python API support

I'm looking for collaborators for making tfgo a Tensorflow Go API 1:1 Python compatible.

A brief recap:

What is tfgo

tfgo is a wrapper built around the TensorFlow Go bindings that allows:

  • defining graphs using method chaining, in order to build graphs exactly as everyone is used to thinking about them (as a flow that goes from the input to the output)
  • importing a SavedModel and executing it
  • using some high-level operations built upon the low-level Go bindings (e.g. tfgo/image).

It can successfully be used to run input pre-processing + inference, or to define data flow graphs directly in Go. See the README.

The problem:

tfgo is not a 1:1 mapping of the Python API, since the tensorflow/op package only exposes some of the C++ primitives, and the TensorFlow Python API is far more complex and complete than the C++ API.

People are familiar with using TensorFlow in Python and, moreover, there are certain objects like the optimizers, the Variables, the Keras layers, the Keras losses, and so on that are defined in Python only.

Therefore tfgo has limited utility. Several people have asked me to add tf.Variable support (the last one in #13), since they would be interested in training models directly in Go and, more generally, in having a 1:1 mapping between the Python functionality and the tfgo functionality.

The idea

tfgo can execute SavedModel objects, and right now we can use Python to define a computational graph even to train a model, exporting a SavedModel that describes the whole training phase and executing it in tfgo (just define the model + the optimizer + the training operation and export this graph).

Wouldn't it be cool to make it possible to define the graph from a Go program, but with the complete support of the Python API?

The idea is to embed a Python interpreter (ideally Python 3, or 2 if there are technological constraints) inside tfgo, and create a Go API that correctly communicates to the interpreter what the user wants and builds a SavedModel. After that, tfgo reads and executes it (exactly like the tf.Session in Python).

E.g. if the user asks for something the Go package can do, just build the graph using the Go bindings/tfgo; otherwise, use the Python interpreter.

Tensorflow 2.0

The TensorFlow Python API contains the decorator @tf.function. Using this decorator it is possible to convert every correctly defined function (one that uses the tf. primitives for everything) into its graph representation.

This could help a lot in implementing the idea since we can just:

  1. build the Python function body from Go
  2. decorate it using @tf.function
  3. export its graph representation in a SavedModel
  4. load it in tfgo for execution

Just like we were used to working in TensorFlow < 2, with the graph-definition + session-execution paradigm.

Thoughts? Suggestions?

installation issue: can't find for_core_protos_go_proto

I'm opening up a new issue since none of the solutions given in the original issue fix the problem at all.

C:\Users\User\go\src\testTensor>go get github.com/galeone/tfgo
package github.com/tensorflow/tensorflow/tensorflow/go/core/protobuf/for_core_protos_go_proto: cannot find package "github.com/tensorflow/tensorflow/tensorflow/go/core/protobuf/for_core_protos_go_proto" in any
of:
        C:\Go\src\github.com\tensorflow\tensorflow\tensorflow\go\core\protobuf\for_core_protos_go_proto (from $GOROOT)
        C:\Users\User\go\src\github.com\tensorflow\tensorflow\tensorflow\go\core\protobuf\for_core_protos_go_proto (from $GOPATH)

I am attempting to run this code, but this repository in its current state is completely uninstallable.

windows installation error

Hi, I've installed the TensorFlow C library in 'C:\libtensorflow 2.6.0' and added this folder to the Path environment variable. Using 'go get github.com/galeone/tfgo' I get the following error:

21b591\attrs.go:20:11: fatal error: tensorflow/c/c_api.h: No such file or directory
   20 | // #include "tensorflow/c/c_api.h"

Is something else still needed?

libtensorflow.so: .dynsym local symbol at index 3 (>= sh_info of 3)

Hello,
I read on the TensorFlow forum that it is necessary to use your lib for writing TensorFlow code in Golang.
I installed the TensorFlow C lib, but I had this problem. Apparently, I have to use the gold linker to compile.

Do you have a solution for using -fuse-ld=gold with gcc?

Error: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.

Hi! I am trying to use tfgo to compute embeddings using the Universal Sentence Encoder, but I am facing an error. I am not sure whether the issue is in the Go code or elsewhere. The model seems to load successfully. Any help is appreciated.

I have downloaded the USE from https://tfhub.dev/google/universal-sentence-encoder/3

The error I am getting is:

model loaded successfully
2022-02-25 17:36:56.393949: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at lookup_table_op.cc:929 : FAILED_PRECONDITION: Table not initialized.
panic: {{function_node __inference_pruned_6740}} {{function_node __inference_pruned_6740}} Table not initialized.
         [[{{node text_preprocessor/hash_table_Lookup/hash_table_Lookup/LookupTableFindV2}}]]
         [[StatefulPartitionedCall/StatefulPartitionedCall]]

goroutine 1 [running]:
github.com/galeone/tfgo.(*Model).Exec(0xc00019c030, 0xc00023fe58, 0x1, 0x1, 0xc00023fe78, 0x0, 0x0, 0x0)
        /Users/pratyushgoel/go/pkg/mod/github.com/galeone/[email protected]/model.go:73 +0xd7
main.main()
        /Users/pratyushgoel/workspace/experiments/tfgo-example/main.go:14 +0x2e5

Process finished with the exit code 2

On running the command saved_model_cli show --all --dir universal-sentence-encoder_3/ I get these results:

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is: 

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['inputs'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: serving_default_inputs:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['outputs'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 512)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict

Concrete Functions:
  Function Name: '__call__'
    Option #1
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=<unknown>, dtype=tf.string, name='inputs')

  Function Name: 'init_op'

  Function Name: 'predict'
    Option #1
      Callable with:
        Argument #1
          inputs

The Go file looks like this:

package main

import (
	"fmt"
	tf "github.com/galeone/tensorflow/tensorflow/go"
	tg "github.com/galeone/tfgo"
)

func main() {
	model := tg.LoadModel("universal-sentence-encoder_3/", []string{"serve"}, nil)
	fmt.Println("model loaded successfully")

	fakeInput, _ := tf.NewTensor([1]string{"Please compute embedding for this string"})
	results := model.Exec([]tf.Output{
		model.Op("StatefulPartitionedCall", 0),
	}, map[tf.Output]*tf.Tensor{
		model.Op("serving_default_inputs", 0): fakeInput,
	})

	predictions := results[0]
	fmt.Println(predictions.Value())
}

loading model

Hi,

I have a saved_model.pb file in the dir path. However, I'm getting the following error when trying to load the model: Could not find SavedModel .pb or .pbtxt at supplied export directory path. I've saved the model with the tf.saved_model.save function as recommended in Python.

When using the saved_model_cli tool the tag is set to "serve": MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

I also get the following error when running the program:
/usr/bin/ld: /usr/local/lib/libtensorflow.so: .dynsym local symbol at index 3 (>= sh_info of 3)

Any ideas for common causes?

EstimatorServe data param type error

Hi @galeone,
When I used this package to make model predictions, I found a mistake in the code,
in model.go:

func (model *Model) EstimatorServe(tensors []tf.Output, data map[string][]float32) (results []*tf.Tensor) {
	sequence, err := preprocessor.PythonDictToByteArray(data)
	if err != nil {
		panic(err)
	}
	input_tensor, _ := tf.NewTensor([]string{string(sequence)})
	return model.Exec(tensors, map[tf.Output]*tf.Tensor{
		model.Op("input_example_tensor", 0): input_tensor})
}

The data param cannot be defined as map[string][]float32.
For the prediction set, we cannot guarantee that all features are float32. That is why preprocessor.Float32ToFeature, preprocessor.StringToFeature, and preprocessor.Int64ToFeature exist.
In theory, the user should choose one of these three type handlers and then use PythonDictToByteArray to serialize the data and pass it to the model; that is the normal logic. If I can, I'll try to write a snippet of code today (see the sketch below), and we'll talk about it after you've seen it.
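
A minimal sketch of the proposed generalization, assuming the quoted model.go above (the name EstimatorServeRaw is hypothetical): accept the already serialized bytes so the caller picks the feature types.

// Hypothetical variant: the caller serializes the features itself
// (via Float32ToFeature/StringToFeature/Int64ToFeature plus
// PythonDictToByteArray) and passes the raw bytes.
func (model *Model) EstimatorServeRaw(tensors []tf.Output, sequence []byte) []*tf.Tensor {
	input_tensor, err := tf.NewTensor([]string{string(sequence)})
	if err != nil {
		panic(err)
	}
	return model.Exec(tensors, map[tf.Output]*tf.Tensor{
		model.Op("input_example_tensor", 0): input_tensor})
}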

How to use TensorFlow's Universal Sentence Encoder

How would I load in the universal-sentence-encoder-large embedding model?

In Python

embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-large/3")
embeddings = embed([
    "The quick brown fox jumps over the lazy dog.",
    "I am a sentence for which I would like to get its embedding"])

print session.run(embeddings)

In Go I've tried:

model, err := tf.LoadSavedModel("universal-sentence-encoder-large", []string{"serve"}, nil)

if err != nil {
    fmt.Printf("Error loading saved model: %s\n", err.Error())
    return
}

but the program panics when trying to load the model.

When I use saved_model_cli I get empty results:

The given SavedModel contains the following tag-sets:

How would I use the model?
The directory looks like:

├── assets
├── saved_model.pb
├── tfhub_module.pb
└── variables

and the data was downloaded and unzipped from https://tfhub.dev/google/universal-sentence-encoder-large/3?tf-hub-format=compressed

Choose which device "cuda:x" or "cpu" to use

After installing libtensorflow with support for both CPU and GPU, how do I select which device to run on?
Thanks in advance.

package main

import (
	"fmt"

	tf "github.com/galeone/tensorflow/tensorflow/go"
	tg "github.com/galeone/tfgo"
)

func describe(i interface{}) {
	fmt.Printf("(%v, %T)\n", i, i)
}

func main() {
	model := tg.LoadModel("/home/yafoz/Desktop/go/src/goTensorflow/tfModel", []string{"serve"}, nil)

	fakeInput, _ := tf.NewTensor([2][28][28][1]float32{})
	results := model.Exec([]tf.Output{
		model.Op("StatefulPartitionedCall", 0),
	}, map[tf.Output]*tf.Tensor{
		model.Op("serving_default_inputs_input", 0): fakeInput,
	})

	fmt.Println(len(results))
	for i := 0; i < 1; i++ {
		predictions := results[i]
		dummy := predictions.Value()
		describe(dummy)
		foo, _ := dummy.([][]float32)
		fmt.Println("%V", foo[0][1])
		fmt.Println(predictions.Shape())
		fmt.Println("-----------------")
	}
}

go get return error "cannot find package"

Can anyone help, please? Trying go get:
go get github.com/tensorflow/tensorflow/tensorflow/go

I am getting the error "cannot find package...".

I know this is not the appropriate place to ask, but what am I doing wrong?

I need to load an h5 model into a Golang app and make predictions based on this model.

tfgo is built on tensorflow/go, but we can't go get it.
Thanks a lot for any help.

retrain.py

Hello,

Do you know if there is any equivalent sample code to retrain.py in Golang?

Thanks for help.

how to solve ld: library not found for -ltensorflow

mullerhadoop-1:tensorgo muller$ GO111MODULE=on go get -u -v github.com/galeone/tfgo
go: finding github.com/galeone/tfgo latest
go: downloading github.com/galeone/tfgo v0.0.0-20191209163057-06635831fc69
go: extracting github.com/galeone/tfgo v0.0.0-20191209163057-06635831fc69
github.com/tensorflow/tensorflow/tensorflow/go

github.com/tensorflow/tensorflow/tensorflow/go

ld: library not found for -ltensorflow
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Can't load model

I created a model using http://opennmt.net.
Here is the result of saved_model_cli show:

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['length'] tensor_info:
        dtype: DT_INT32
        shape: (-1)
        name: Placeholder_1:0
    inputs['tokens'] tensor_info:
        dtype: DT_STRING
        shape: (-1, -1)
        name: Placeholder:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['alignment'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 4, -1, -1)
        name: seq2seq/decoder/Reshape_7:0
    outputs['length'] tensor_info:
        dtype: DT_INT32
        shape: (-1, 4)
        name: seq2seq/decoder/decoder_1/while/Exit_19:0
    outputs['log_probs'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 4)
        name: seq2seq/decoder/decoder_1/while/Exit_14:0
    outputs['tokens'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 4, -1)
        name: seq2seq/index_to_string_Lookup:0
  Method name is: tensorflow/serving/predict`

But I can't load the model:

goroutine 1 [running]:
github.com/galeone/tfgo.LoadModel(0x4ce6a9, 0xa, 0xc00003c778, 0x1, 0x1, 0x0, 0x49937f)
        /exwindoz/home/juno/gowork/src/github.com/galeone/tfgo/model.go:35 +0x11d
main.main()
        /exwindoz/home/juno/gowork/src/github.com/remotejob/tensorflow-export-import-example/example.go:13 +0x77
exit status 2

Create mapping from model for model.Exec

Hi,

Is there a way to automatically create the mapping from the placeholder (e.g. Placeholder_1) to the input name (e.g. Height) from the model?

Also, is there a way to automatically read the "prob_out" output from the model?

[]tf.Output and map[tf.Output]*tf.Tensor are both used for model.Exec:

height, _ := tf.NewTensor([1]int64{"Height"})
width, _ := tf.NewTensor([1]int64{"Width"})

results := model.Exec([]tf.Output{
    model.Op("prob_out", 0),
}, map[tf.Output]*tf.Tensor{
   model.Op("Placeholder_1", 0): height,
   model.Op("Placeholder_2", 0): width,
})

Thanks.

ambiguous import: found package in multiple modules

I am importing the packages as suggested in the examples:

tf "github.com/galeone/tensorflow/tensorflow/go"
tg "github.com/galeone/tfgo"

and get the following error

main.go:11:2: ambiguous import: found package github.com/galeone/tensorflow/tensorflow/go in multiple modules:
        github.com/galeone/tensorflow v2.3.1+incompatible (~/.gvm/pkgsets/go1.15.8/global/pkg/mod/github.com/galeone/[email protected]+incompatible/tensorflow/go)
        github.com/galeone/tensorflow/tensorflow/go v0.0.0-20210519172502-4018d721b591 (~/.gvm/pkgsets/go1.15.8/global/pkg/mod/github.com/galeone/tensorflow/tensorflow/[email protected])

../../.gvm/pkgsets/go1.15.8/global/pkg/mod/github.com/galeone/[email protected]/ops.go:24:2: ambiguous import: found package github.com/galeone/tensorflow/tensorflow/go/op in multiple modules:
        github.com/galeone/tensorflow v2.3.1+incompatible (~/.gvm/pkgsets/go1.15.8/global/pkg/mod/github.com/galeone/[email protected]+incompatible/tensorflow/go/op)
        github.com/galeone/tensorflow/tensorflow/go v0.0.0-20210519172502-4018d721b591 (~/.gvm/pkgsets/go1.15.8/global/pkg/mod/github.com/galeone/tensorflow/tensorflow/[email protected]/op)

After running go build the relevant section of my go.sum looks like:

github.com/galeone/tensorflow v1.15.4 h1:TQeJss9Aeipg2K6kNwfqKRNpDLLwBdCouRKdCfzQ2pg=
github.com/galeone/tensorflow v2.3.1+incompatible h1:RRiPEbcVK2IghF7YFDDF33tx+XMr2NuCriDBlMWYm5s=
github.com/galeone/tensorflow v2.3.1+incompatible/go.mod h1:tPYvIhe58Qvzh/hJfdy0881FcAnouYskaz5tNIDEeMA=
github.com/galeone/tensorflow/tensorflow/go v0.0.0-20210519172502-4018d721b591 h1:1UOml7GsssubL3OW53W9+kBk5BQICiG95TNXAmTrrsM=
github.com/galeone/tensorflow/tensorflow/go v0.0.0-20210519172502-4018d721b591/go.mod h1:0LCzFWUL71lYeHtxlL/15k/+5ZKVzJk6Z+hLX1UBoUQ=
github.com/galeone/tfgo v0.0.0-20210519185601-7d7131a16882 h1:YrghpSKeSJYE24fn/NzuAVXukc9npefUV6j10sgnj8Y=
github.com/galeone/tfgo v0.0.0-20210519185601-7d7131a16882/go.mod h1:05ASagqJQa1Xev+FhblKviD9OAbRUWN4XAN2A1+aTd0=

and under github.com/galeone the following packages are installed:

Backward Compatibility

Hello galeone, thanks for your work. Below are some questions confusing me.

  1. Since you have updated to tf2.5, I wonder whether this version is compatible with tf2.4.1 or older tf versions.

  2. What's the performance difference between tfgo and the TensorFlow Go API?

  3. What will we get, apart from convenience, if we use tfgo rather than the official API?

Create error-returning variants of all calls

I understand the convenience of just turning errors into panics, it allows easier method chaining and can make the code look much cleaner.

But when integrating into existing codebases, the code is no longer idiomatic and it becomes a bit messy to handle errors without taking down a large system. Of course, it could be handled by wrapping all calls in recover, but that just makes the code very confusing and harder to maintain.

It would be nice if errors handling was a little easier.

  • One solution would be to just have an extra copy of methods that can return errors. Like LoadModelE, model.ExecE, etc.
  • For method chaining the error could be kept in the returned struct and checked using a .Check call.
    This will conflict with the current use of panics, so it should either be a new set of chainable methods or put in a new major version

E.g.

  m := tg.LoadModelP(....)  // panic on error
  m, err := tg.LoadModelE(....) // return error
  m := tg.LoadModel(...) // does not panic, but saves the error
  err := m.Check() // return above error

It's definitely not perfect, and the idea of having an error hanging around in the model struct isn't the most elegant solution. (A recover-based wrapper is sketched below.)
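
For reference, a minimal sketch of the recover-based workaround mentioned above, with LoadModelE as a local helper rather than a tfgo API:

package main

import (
        "fmt"

        tg "github.com/galeone/tfgo"
)

// LoadModelE converts tg.LoadModel's panic into an ordinary error.
func LoadModelE(path string, tags []string) (model *tg.Model, err error) {
        defer func() {
                if r := recover(); r != nil {
                        err = fmt.Errorf("tfgo: LoadModel failed: %v", r)
                }
        }()
        model = tg.LoadModel(path, tags, nil)
        return model, nil
}

func main() {
        if _, err := LoadModelE("missing/model", []string{"serve"}); err != nil {
                fmt.Println(err) // handled instead of taking the process down
        }
}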

Slow inference

First, thanks so much for working on this library! I'm using it for serving a couple of models with good success.

I ran into something that surprised me today; it's not necessarily an issue with this library, but wanted to get your thoughts. I'm comparing execution of an SSD detector via a python script and a go executable. Just comparing the elapsed wall clock time with unix time command, the go executable is quite a bit faster -- around 4 seconds vs around 13 seconds for the python script. However, I wanted finer-grained timing data, so I measured just the model.Exec call in go vs the model.predict call in python, and I found that the go version is roughly twice as slow (3 sec vs 1.7 sec). My guess is that for a single run, parsing all the python code accounts for all of the extra time.

Both of these are using the same model, although the python version is defining the model in code and loading weights from an hdf5 file, while the go version is loading model + weights from the SavedModel file (.pb format) -- not sure if that would make any difference.

Do you have any ideas about why the graph execution would be slower under go, or how I could speed it up?

Thanks!

Potential memory leak on reloading model

Hey guys,

We use tfgo and notice an increase in memory usage each time our model gets reloaded. We have a running service which periodically checks whether the model got updated and reloads it. Now, I wouldn't expect the memory usage to increase, since the model in memory should be replaced by the updated one.

The code to load the model is

// load model into memory
	model := tg.LoadModel(
		"path/to/our/model",
		[]string{
			"serve",
		},
		&tf.SessionOptions{},
	)

But our monitoring shows that the usage goes up every time the model gets reloaded (once per hour). I profiled the service with pprof and could not see that any of the internal components in our code has significantly growing memory usage.

Furthermore, I built tensorflow 2.9.1 with debug symbols and wrote a small Go app just reloading the model. I did this to check for memory leaks with memleak-bpfcc from https://github.com/iovisor/bcc. This gave me the following stack trace, which, I believe, shows that memory is leaked:

	1770048 bytes in 9219 allocations from stack
		operator new(unsigned long)+0x19 [libstdc++.so.6.0.28]
		google::protobuf::internal::GenericTypeHandler<tensorflow::NodeDef>::New(google::protobuf::Arena*)+0x1c [libtensorflow_framework.so.2]
		google::protobuf::internal::GenericTypeHandler<tensorflow::NodeDef>::NewFromPrototype(tensorflow::NodeDef const*, google::protobuf::Arena*)+0x20 [libtensorflow_framework.so.2]
		google::protobuf::RepeatedPtrField<tensorflow::NodeDef>::TypeHandler::Type* google::protobuf::internal::RepeatedPtrFieldBase::Add<google::protobuf::RepeatedPtrField<tensorflow::NodeDef>::TypeHandler>(google::protobuf::RepeatedPtrField<tensorflow::NodeDef>::TypeHandler::Type*)+0xc2 [libtensorflow_framework.so.2]
		google::protobuf::RepeatedPtrField<tensorflow::NodeDef>::Add()+0x21 [libtensorflow_framework.so.2]
		tensorflow::FunctionDef::add_node_def()+0x20 [libtensorflow_framework.so.2]
		tensorflow::FunctionDef::MergePartialFromCodedStream(google::protobuf::io::CodedInputStream*)+0x334 [libtensorflow_framework.so.2]
		bool google::protobuf::internal::WireFormatLite::ReadMessage<tensorflow::FunctionDef>(google::protobuf::io::CodedInputStream*, tensorflow::FunctionDef*)+0x64 [libtensorflow_framework.so.2]
		tensorflow::FunctionDefLibrary::MergePartialFromCodedStream(google::protobuf::io::CodedInputStream*)+0x240 [libtensorflow_framework.so.2]
		bool google::protobuf::internal::WireFormatLite::ReadMessage<tensorflow::FunctionDefLibrary>(google::protobuf::io::CodedInputStream*, tensorflow::FunctionDefLibrary*)+0x64 [libtensorflow_framework.so.2]
		tensorflow::GraphDef::MergePartialFromCodedStream(google::protobuf::io::CodedInputStream*)+0x291 [libtensorflow_framework.so.2]
		bool google::protobuf::internal::WireFormatLite::ReadMessage<tensorflow::GraphDef>(google::protobuf::io::CodedInputStream*, tensorflow::GraphDef*)+0x64 [libtensorflow_framework.so.2]
		tensorflow::MetaGraphDef::MergePartialFromCodedStream(google::protobuf::io::CodedInputStream*)+0x325 [libtensorflow_framework.so.2]
		bool google::protobuf::internal::WireFormatLite::ReadMessage<tensorflow::MetaGraphDef>(google::protobuf::io::CodedInputStream*, tensorflow::MetaGraphDef*)+0x64 [libtensorflow_framework.so.2]
		tensorflow::SavedModel::MergePartialFromCodedStream(google::protobuf::io::CodedInputStream*)+0x25b [libtensorflow_framework.so.2]
		google::protobuf::MessageLite::MergeFromCodedStream(google::protobuf::io::CodedInputStream*)+0x32 [libtensorflow_framework.so.2]
		google::protobuf::MessageLite::ParseFromCodedStream(google::protobuf::io::CodedInputStream*)+0x3e [libtensorflow_framework.so.2]
		tensorflow::ReadBinaryProto(tensorflow::Env*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, google::protobuf::MessageLite*)+0x141 [libtensorflow_framework.so.2]
		tensorflow::(anonymous namespace)::ReadSavedModel(absl::lts_20211102::string_view, tensorflow::SavedModel*)+0x136 [libtensorflow_framework.so.2]
		tensorflow::ReadMetaGraphDefFromSavedModel(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_set<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, tensorflow::MetaGraphDef*)+0x5d [libtensorflow_framework.so.2]
		tensorflow::LoadSavedModelInternal(tensorflow::SessionOptions const&, tensorflow::RunOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_set<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, tensorflow::SavedModelBundle*)+0x41 [libtensorflow_framework.so.2]
		tensorflow::LoadSavedModel(tensorflow::SessionOptions const&, tensorflow::RunOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::unordered_set<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, tensorflow::SavedModelBundle*)+0xc0 [libtensorflow_framework.so.2]
		TF_LoadSessionFromSavedModel+0x2a8 [libtensorflow.so]
		_cgo_6ae2e7a71f9a_Cfunc_TF_LoadSessionFromSavedModel+0x6e [testapp]
		runtime.asmcgocall.abi0+0x64 [testapp]
		github.com/galeone/tensorflow/tensorflow/go._Cfunc_TF_LoadSessionFromSavedModel.abi0+0x4d [testapp]
		github.com/galeone/tensorflow/tensorflow/go.LoadSavedModel.func2+0x14f [testapp]
		github.com/galeone/tensorflow/tensorflow/go.LoadSavedModel+0x2b6 [testapp]
		github.com/galeone/tfgo.LoadModel+0x6d [testapp]
		main.reloadModel+0x276 [testapp]
		main.main+0x72 [testapp]
		runtime.main+0x212 [testapp]
		runtime.goexit.abi0+0x1 [testapp]


As you can see, this stack trace shows calls to tfgo and to the underlying tensorflow library. I am not sure if I read it right, but it seems like there is a leak in tfgo or tensorflow itself.

Is there a way to explicitly release the memory of a loaded model when we reload? Could it be a problem in tfgo?
If you need more information on this, please tell me.

Thanks in advance :)

Possible to load model from memory?

The tfgo.LoadModel() method requires the path to the model file on disk.
Suppose I have a []byte that contains a model file's content, just downloaded from the internet. Is it possible to load the model directly from this []byte? (A workaround sketch follows.)
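
One workaround sketch, since both tfgo and the C API load from a directory: materialize the bytes into the SavedModel layout in a temporary directory and load from there. This assumes the []byte is a complete saved_model.pb; a real model usually also needs its variables/ and assets/ files written alongside it.

package main

import (
        "fmt"
        "os"
        "path/filepath"

        tg "github.com/galeone/tfgo"
)

// loadFromBytes is a workaround sketch, not a tfgo API.
func loadFromBytes(pb []byte) (*tg.Model, error) {
        dir, err := os.MkdirTemp("", "savedmodel")
        if err != nil {
                return nil, err
        }
        if err := os.WriteFile(filepath.Join(dir, "saved_model.pb"), pb, 0o600); err != nil {
                return nil, err
        }
        return tg.LoadModel(dir, []string{"serve"}, nil), nil
}

func main() {
        pb, _ := os.ReadFile("saved_model.pb") // stand-in for the downloaded bytes
        model, err := loadFromBytes(pb)
        fmt.Println(model, err)
}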

SparseTensor support

As in this Python model example: how do I append the SparseTensor?
The input is (indices, values, dense_shape).

Serving models with custom op using tfgo

Hi, I trained a model with a custom op and exported it using the saved_model API, and I would like to serve it using tfgo. However, since tfgo binds to the TensorFlow C library (more precisely, libtensorflow.so), which is very hard to modify, I'm not sure how to register the custom op in the binary in this case. Is there any chance, and would anyone give some help?
Here is an example of a training script in which dynamic_embedding is implemented as a custom op built into the library tensorflow-recommenders-addons, and when I try to load the model using tfgo I get the following error:

panic: Op type not registered 'TFRA>CuckooHashTableOfTensors' in binary running on CXJK8129239. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.

[Help needed] input must be 4-dimensional

My code is as below:

		model := tg.LoadModel("/opt/models/mobilenet_v2_140_224", []string{"serve"}, nil)
		scope := tg.NewRoot()
		img := image.Read(scope, "/tmp/test.jpeg", 3)
		// img = img.Clone().ResizeArea(image.Size{Height: 224, Width: 224}).Center()
		input := tg.Exec(scope, []tf.Output{img.Value()}, nil, &tf.SessionOptions{})
		tensor, err := tf.NewTensor(input[0].Value())
		if err != nil {
			log.Error().Err(err).Msg("")
		}
		results := model.Exec(
			[]tf.Output{model.Op("StatefulPartitionedCall", 0)},
			map[tf.Output]*tf.Tensor{model.Op("serving_default_input", 0): tensor},
		)
		log.Debug().Interface("results", results[0].Value()).Msg("")

It fails with:

panic: input must be 4-dimensional[573,860,3]
	 [[{{node predict/MobilenetV2/Conv/Relu6}}]]

Could someone help me fix this issue? (A possible fix is sketched below.)

BTW: the same image file and model work when using the official TensorFlow Go package.
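
The panic indicates the model expects a 4-D [batch, height, width, channels] input while image.Read yields a 3-D [573, 860, 3] tensor. A hedged sketch of one possible fix, prepending a batch dimension with op.ExpandDims from the generated op package (the node names and any resizing/scaling this model needs are assumptions to verify with saved_model_cli):

package main

import (
	"fmt"

	tf "github.com/galeone/tensorflow/tensorflow/go"
	"github.com/galeone/tensorflow/tensorflow/go/op"
	tg "github.com/galeone/tfgo"
	"github.com/galeone/tfgo/image"
)

func main() {
	model := tg.LoadModel("/opt/models/mobilenet_v2_140_224", []string{"serve"}, nil)
	scope := tg.NewRoot()
	img := image.Read(scope, "/tmp/test.jpeg", 3)
	// Expand [height, width, channels] to [1, height, width, channels].
	batched := op.ExpandDims(
		scope.SubScope("batch"),
		img.Value(),
		op.Const(scope.SubScope("axis"), int32(0)),
	)
	input := tg.Exec(scope, []tf.Output{batched}, nil, &tf.SessionOptions{})
	tensor, err := tf.NewTensor(input[0].Value())
	if err != nil {
		panic(err)
	}
	results := model.Exec(
		[]tf.Output{model.Op("StatefulPartitionedCall", 0)},
		map[tf.Output]*tf.Tensor{model.Op("serving_default_input", 0): tensor},
	)
	fmt.Println(results[0].Value())
}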

Failed to go get tfgo

1. Steps to reproduce

1.1 install tensorflow c lib

[root@node01 ~]# curl -L "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-linux-x86_64-2.5.0.tar.gz" | sudo tar -C /usr/local -xz
sudo ldconfig

1.2 go get tfgo

[root@node01 ~]# GONOSUMDB="github.com/galeone/tensorflow,github.com/galeone/tfgo" go get -v github.com/galeone/tfgo
google.golang.org/protobuf/internal/flags
google.golang.org/protobuf/internal/set
google.golang.org/protobuf/internal/pragma
google.golang.org/protobuf/internal/detrand
google.golang.org/protobuf/internal/version
google.golang.org/protobuf/internal/errors
google.golang.org/protobuf/encoding/protowire

google.golang.org/protobuf/reflect/protoreflect
google.golang.org/protobuf/internal/descopts
google.golang.org/protobuf/internal/encoding/messageset
google.golang.org/protobuf/internal/descfmt
google.golang.org/protobuf/internal/order
google.golang.org/protobuf/internal/strs
google.golang.org/protobuf/runtime/protoiface
google.golang.org/protobuf/internal/genid
google.golang.org/protobuf/reflect/protoregistry
google.golang.org/protobuf/internal/encoding/text
google.golang.org/protobuf/proto
google.golang.org/protobuf/internal/encoding/defval
google.golang.org/protobuf/encoding/prototext
google.golang.org/protobuf/internal/filedesc
google.golang.org/protobuf/internal/encoding/tag
google.golang.org/protobuf/internal/impl
google.golang.org/protobuf/internal/filetype
google.golang.org/protobuf/runtime/protoimpl
github.com/galeone/tensorflow/tensorflow/go/core/framework/device_attributes_go_proto
github.com/galeone/tensorflow/tensorflow/go/core/framework/types_go_proto
github.com/galeone/tensorflow/tensorflow/go/core/framework/tensor_shape_go_proto
github.com/galeone/tensorflow/tensorflow/go/core/framework/tensor_slice_go_proto
github.com/galeone/tensorflow/tensorflow/go/core/framework/versions_go_proto
github.com/galeone/tensorflow/tensorflow/go/core/framework/variable_go_proto
github.com/galeone/tensorflow/tensorflow/go/core/framework/allocation_description_go_proto
github.com/galeone/tensorflow/tensorflow/go/stream_executor
google.golang.org/protobuf/types/known/anypb
google.golang.org/protobuf/types/known/durationpb
google.golang.org/protobuf/types/descriptorpb
github.com/galeone/tensorflow/tensorflow/go/core/framework/resource_handle_go_proto
github.com/galeone/tensorflow/tensorflow/go/core/framework/cost_graph_go_proto
github.com/galeone/tensorflow/tensorflow/go/core/framework/tensor_description_go_proto
github.com/galeone/tensorflow/tensorflow/go/core/framework/tensor_go_proto
github.com/galeone/tensorflow/tensorflow/go/core/framework/step_stats_go_proto
github.com/galeone/tensorflow/tensorflow/go/core/framework/attr_value_go_proto
google.golang.org/protobuf/reflect/protodesc
github.com/galeone/tensorflow/tensorflow/go/core/framework/node_def_go_proto
github.com/galeone/tensorflow/tensorflow/go/core/framework/op_def_go_proto
github.com/galeone/tensorflow/tensorflow/go/core/framework/function_go_proto
github.com/galeone/tensorflow/tensorflow/go/core/framework/graph_go_proto
github.com/golang/protobuf/proto
github.com/galeone/tensorflow/tensorflow/go/core/protobuf/for_core_protos_go_proto
github.com/galeone/tensorflow/tensorflow/go
# github.com/galeone/tensorflow/tensorflow/go
../pkg/mod/github.com/galeone/tensorflow/tensorflow/[email protected]/attrs.go:20:11: fatal error: tensorflow/c/c_api.h: No such file or directory
   20 | // #include "tensorflow/c/c_api.h"
      |           ^~~~~~~~~~~~~~~~~~~~~~
compilation terminated.

2. Reason

I found out that in a previous commit f0fd7c0, the way the github.com/galeone/tensorflow package is imported was changed to the wrong form in the go.mod file. This results in github.com/galeone/tensorflow/tensorflow/go being downloadable but not compilable, because it depends on github.com/galeone/tensorflow/tensorflow/c.

So I used go get to download the previous version, and it was successful.

[root@node01 ~]# GONOSUMDB="github.com/galeone/tensorflow,github.com/galeone/tfgo" go get -v github.com/galeone/tfgo@2ad800d7d3c5e2dbc09037efecbfdbd2dc9e88ab
go get: added github.com/galeone/tfgo v0.0.0-20210519173640-2ad800d7d3c5

3. My env

OS Version: WSL Ubuntu-20.04

[root@node01 ~]# go version
go version go1.16.4 linux/amd64
[root@node01 ~]# go env
GO111MODULE="on"
GOARCH="amd64"
GOBIN="/home/sycki/golang/bin"
GOCACHE="/home/sycki/.cache/go-build"
GOENV="/home/sycki/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GOMODCACHE="/home/sycki/golang/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/sycki/golang"
GOPRIVATE=""
GOPROXY="https://proxy.golang.com.cn,direct"
GOROOT="/home/sycki/program/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/home/sycki/program/go/pkg/tool/linux_amd64"
GOVCS=""
GOVERSION="go1.16.4"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/sycki/golang/phoebe/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build4110711487=/tmp/go-build -gno-record-gcc-switches"

So, should go.mod be changed back to the old way? Sorry, I don't have enough time to modify and test it myself.

Error "Could not satisfy explicit device specification"

When I try to import a trained TF model in tfgo, I sometimes encounter the following error. Is there any restriction between the platform used for training and the one used for testing?

2019-03-05 02:43:13.272111: I tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: infer
2019-03-05 02:43:13.273853: I tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { tag }
2019-03-05 02:43:13.275663: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-03-05 02:43:13.282021: E tensorflow/stream_executor/cuda/cuda_driver.cc:300] failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error
2019-03-05 02:43:13.282044: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:150] kernel driver does not appear to be running on this host (shining-Inspiron-5680): /proc/driver/nvidia/version does not exist
2019-03-05 02:43:13.296950: I tensorflow/cc/saved_model/loader.cc:259] SavedModel load for tags { tag }; Status: fail. Took 24847 microseconds.
panic: Cannot assign a device for operation embeddings/encoder/embedding_encoder/Initializer/random_uniform/RandomUniform: Could not satisfy explicit device specification '' because the node {{colocation_node embeddings/encoder/embedding_encoder/Initializer/random_uniform/RandomUniform}} was colocated with a group of nodes that required incompatible device '/device:GPU:0'
Colocation Debug Info:
Colocation group had the following types and devices:
Assign: CPU
Identity: CPU XLA_CPU XLA_GPU
VariableV2: CPU
Mul: CPU XLA_CPU XLA_GPU
Add: CPU XLA_CPU XLA_GPU
Sub: CPU XLA_CPU XLA_GPU
GatherV2: CPU XLA_CPU XLA_GPU
RandomUniform: CPU XLA_CPU XLA_GPU
Const: CPU XLA_CPU XLA_GPU

Colocation members and user-requested devices:
embeddings/encoder/embedding_encoder/Initializer/random_uniform/shape (Const)
embeddings/encoder/embedding_encoder/Initializer/random_uniform/min (Const)
embeddings/encoder/embedding_encoder/Initializer/random_uniform/max (Const)
embeddings/encoder/embedding_encoder/Initializer/random_uniform/RandomUniform (RandomUniform)
embeddings/encoder/embedding_encoder/Initializer/random_uniform/sub (Sub)
embeddings/encoder/embedding_encoder/Initializer/random_uniform/mul (Mul)
embeddings/encoder/embedding_encoder/Initializer/random_uniform (Add)
embeddings/encoder/embedding_encoder (VariableV2) /device:GPU:0
embeddings/encoder/embedding_encoder/Assign (Assign) /device:GPU:0
embeddings/encoder/embedding_encoder/read (Identity) /device:GPU:0
dynamic_seq2seq/encoder/embedding_lookup/axis (Const) /device:GPU:0
dynamic_seq2seq/encoder/embedding_lookup (GatherV2) /device:GPU:0
save/Assign_3 (Assign) /device:GPU:0
save_1/Assign_3 (Assign) /device:GPU:0

 [[{{node embeddings/encoder/embedding_encoder/Initializer/random_uniform/RandomUniform}} = RandomUniform[T=DT_INT32, _class=["loc:@embeddings/encoder/embedding_encoder"], _output_shapes=[[7709,128]], dtype=DT_FLOAT, seed=0, seed2=0](embeddings/encoder/embedding_encoder/Initializer/random_uniform/shape)]]

tensor example support

It seems that it is not possible to use "github.com/galeone/tfgo/proto/example" with tfgo v2.9.
Are there any available ways to use example.Features and example.Example in tfgo v2.9?
Or could you let me know another way to feed saved models without them?

Invalid argument: shape must be a vector of {int32,int64}, got shape []

I am trying to use the RandomUniform interface as follows:

func TestRandomUniform(t *testing.T) {
	root := op.NewScope()

	T := op.Placeholder(root.SubScope("input"), tf.Int64)

	seed := op.RandomUniformSeed(-1.0)
	seed2 := op.RandomUniformSeed2(1.0)
	product := op.RandomUniform(root,T,tf.Float,seed,seed2)


	graph, err := root.Finalize()
	if err != nil {
		panic(err.Error())
	}

	var sess *tf.Session
	sess, err = tf.NewSession(graph, &tf.SessionOptions{})
	if err != nil {
		panic(err.Error())
	}

	var A *tf.Tensor

	if A, err = tf.NewTensor(int64(1)); err != nil {
		panic(err.Error())
	}

	var results []*tf.Tensor
	if results, err = sess.Run(
		map[tf.Output]*tf.Tensor{
			T: A,
		},
		[]tf.Output{product}, nil); err != nil {
			panic(err.Error())
	}
	for _, result := range results {
		fmt.Println(result.Value().([]int64))
	}

}
Invalid argument: shape must be a vector of {int32,int64}, got shape []
panic: shape must be a vector of {int32,int64}, got shape []
	 [[Node: RandomUniform = RandomUniform[T=DT_INT64, _class=[], dtype=DT_FLOAT, seed=-1, seed2=1, _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_input/Placeholder_0_0)]] [recovered]
	panic: shape must be a vector of {int32,int64}, got shape []
	 [[Node: RandomUniform = RandomUniform[T=DT_INT64, _class=[], dtype=DT_FLOAT, seed=-1, seed2=1, _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_input/Placeholder_0_0)]]

goroutine 5 [running]

But it prints this error for me. What does it mean? I am not good at C++.
I also tried your tfgo and got the same result. (A corrected sketch follows.)
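
The message means RandomUniform's first input is the shape of the tensor to generate, so the placeholder must be fed a rank-1 vector of dimensions rather than the scalar int64(1) used above. A minimal corrected sketch with the raw bindings:

package main

import (
	"fmt"

	tf "github.com/galeone/tensorflow/tensorflow/go"
	"github.com/galeone/tensorflow/tensorflow/go/op"
)

func main() {
	root := op.NewScope()
	// The placeholder carries the *shape* of the output, e.g. [2, 3].
	shape := op.Placeholder(root.SubScope("input"), tf.Int64)
	product := op.RandomUniform(root, shape, tf.Float,
		op.RandomUniformSeed(-1), op.RandomUniformSeed2(1))

	graph, err := root.Finalize()
	if err != nil {
		panic(err)
	}
	sess, err := tf.NewSession(graph, &tf.SessionOptions{})
	if err != nil {
		panic(err)
	}
	A, err := tf.NewTensor([]int64{2, 3}) // a vector, not a scalar
	if err != nil {
		panic(err)
	}
	results, err := sess.Run(
		map[tf.Output]*tf.Tensor{shape: A},
		[]tf.Output{product}, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(results[0].Value().([][]float32)) // 2x3 uniform values
}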

loading image, reshaping and feeding to model input

Hey,

I am using python3 + keras in my projects and want to learn Go.
What is the right way to prepare an image for inference using tfgo?

Example steps

  1. load the rgb image: shape (59, 199, 3)
  2. convert to grey: shape (59, 199, 1)
  3. resize to (32, 100, 1)
  4. reshape to (100, 32, 1)
  5. convert to float

then feed the image to the model (a preprocessing sketch follows the snippet below):

results := model.Exec([]tf.Output{
     model.Op("the_output", 0),
 }, map[tf.Output]*tf.Tensor{
     model.Op("the_input", 0): image_jpeg,
 })

model.Exec: panic: Cannot parse tensor from proto: dtype: DT_VARIANT

Here's the example I'm trying:
https://gist.github.com/9nut/f95bb4cbe9c223e9f73a9e06429f71ac

I get a panic, with an error message similar to this tensorflow issue that @galeone also commented on:
tensorflow/tensorflow#44428

Here's the output from my attempt:

2021-03-26 14:00:39.257859: I tensorflow/cc/saved_model/reader.cc:32] Reading SavedModel from: ./centernet_hourglass_512x512_kpts_1
2021-03-26 14:00:39.657372: I tensorflow/cc/saved_model/reader.cc:55] Reading meta graph with tags { serve }
2021-03-26 14:00:39.657419: I tensorflow/cc/saved_model/reader.cc:93] Reading SavedModel debug info (if present) from: ./centernet_hourglass_512x512_kpts_1
2021-03-26 14:00:39.657502: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-03-26 14:00:41.293101: I tensorflow/cc/saved_model/loader.cc:206] Restoring SavedModel bundle.
2021-03-26 14:00:46.853281: I tensorflow/cc/saved_model/loader.cc:190] Running initialization op on SavedModel bundle at path: ./centernet_hourglass_512x512_kpts_1
2021-03-26 14:00:48.256126: I tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: success: OK. Took 8998267 microseconds.
2021-03-26 14:00:53.231174: W tensorflow/core/grappler/optimizers/loop_optimizer.cc:906] Skipping loop optimization for Merge node with control input: StatefulPartitionedCall/StatefulPartitionedCall/cond/then/_1776/cond/Assert_3/AssertGuard/branch_executed/_1989
2021-03-26 14:00:54.407937: E tensorflow/core/framework/tensor.cc:555] Could not decode variant with type_name: "tensorflow::TensorList". Perhaps you forgot to register a decoder via REGISTER_UNARY_VARIANT_DECODE_FUNCTION?
2021-03-26 14:00:54.407991: W tensorflow/core/framework/op_kernel.cc:1740] OP_REQUIRES failed at constant_op.cc:79 : Invalid argument: Cannot parse tensor from tensor_proto.
2021-03-26 14:00:54.442763: E tensorflow/core/framework/tensor.cc:555] Could not decode variant with type_name: "tensorflow::TensorList". Perhaps you forgot to register a decoder via REGISTER_UNARY_VARIANT_DECODE_FUNCTION?
2021-03-26 14:00:54.443281: W tensorflow/core/framework/op_kernel.cc:1740] OP_REQUIRES failed at constant_op.cc:79 : Invalid argument: Cannot parse tensor from proto: dtype: DT_VARIANT
tensor_shape {
}
variant_val {
type_name: "tensorflow::TensorList"
metadata: "\000\001\377\377\377\377\377\377\377\377\377\001\022\002\010\004"
tensors {
dtype: DT_FLOAT
tensor_shape {
dim {
size: 4
}
}
float_val: 0
float_val: 0
float_val: 1
float_val: 1
}
}

panic: Cannot parse tensor from proto: dtype: DT_VARIANT
tensor_shape {
}
variant_val {
type_name: "tensorflow::TensorList"
metadata: "\000\001\377\377\377\377\377\377\377\377\377\001\022\002\010\004"
tensors {
dtype: DT_FLOAT
tensor_shape {
dim {
size: 4
}
}
float_val: 0
float_val: 0
float_val: 1
float_val: 1
}
}

     [[{{node StatefulPartitionedCall/StatefulPartitionedCall/map/TensorArrayUnstack_1/TensorListFromTensor/_0__cf__1}}]]

goroutine 1 [running]:
github.com/galeone/tfgo.(*Model).Exec(0xc00012c030, 0xc003f2fe20, 0x1, 0x1, 0xc003f2fe68, 0x0, 0xc013463ea8, 0x1)
$GOPATH/GoPkgs/pkg/mod/github.com/galeone/[email protected]/model.go:73 +0xd7
main.inferCenterNet()
$HOME/src/savedmodelsrv/runinfer.go:80 +0x4c5
main.main()
$HOME/src/savedmodelsrv/main.go:60 +0x25