

Integer Neural Net

The integer neural network is identical in form and function to a standard, floating-point neural network; the difference is that all operations are performed on integers rather than floating-point numbers. This reduces computational complexity and makes the network easier to implement in hardware (e.g. as a dedicated co-processor of some kind).
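As a rough illustration of the idea, a neuron's weighted sum can be computed entirely in integer (fixed-point) arithmetic: values are stored scaled by a power of two, products accumulate in a wide integer, and a right shift restores the scale. This is a minimal sketch with illustrative names, not the repository's actual code:

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// All values are fixed-point integers scaled by 2^SHIFT (Q12 format here).
constexpr int SHIFT = 12;

// Weighted sum of inputs: multiply in 64-bit to avoid overflow,
// then shift right to return to the Q12 scale.
int32_t weighted_sum(const std::vector<int32_t>& weights,
                     const std::vector<int32_t>& inputs) {
    int64_t acc = 0;
    for (std::size_t i = 0; i < weights.size(); ++i)
        acc += static_cast<int64_t>(weights[i]) * inputs[i];
    return static_cast<int32_t>(acc >> SHIFT);
}
```

For example, `1.0 * 2.0` becomes `weighted_sum({4096}, {8192})`, which returns `8192` (i.e. 2.0 in Q12) using only integer multiplies and shifts.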

Because training relies on gradient-based methods, much of the underlying mathematics does not carry over to integer arithmetic, so integer networks are typically not trained directly. Rather, training is performed on a standard floating-point network and the results are then converted to an integer network at some defined bit-depth. Likewise, the activation function usually relies on floating-point operations, so in the integer network's case it is precomputed, saved to a file, and read into memory as an array. A simple table lookup then gives the integer neural network much faster access to the activation values than computing them would.
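The precomputed activation table can be built offline in a few lines. The sketch below samples a float sigmoid once and quantizes it to the integer range; the table size and input range are illustrative assumptions, and in the actual project the table is written to and later read from a file:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

constexpr int BITS = 12;
constexpr int32_t MAX_NEURON = 1 << BITS;  // 4096 activation levels

// Sample the sigmoid over an assumed input range [-8, 8] and scale
// each result to an integer in [0, MAX_NEURON - 1]. Floating point
// is used here only at build time; the runtime network just indexes
// the resulting array.
std::vector<int32_t> build_sigmoid_table(double in_min = -8.0,
                                         double in_max = 8.0) {
    std::vector<int32_t> table(MAX_NEURON);
    for (int i = 0; i < MAX_NEURON; ++i) {
        double x = in_min + (in_max - in_min) * i / (MAX_NEURON - 1);
        double y = 1.0 / (1.0 + std::exp(-x));
        table[i] = static_cast<int32_t>(y * (MAX_NEURON - 1) + 0.5);
    }
    return table;
}
```

At run time, an activation becomes `table[index]` for an integer index derived from the neuron's weighted sum, with no floating-point math involved.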

The provided code is the core necessary to run an integer neural network. Training must be done on a floating-point neural network for new data sets; the neural network in sepol/bp-neural-net is well suited to this purpose. In main.cpp, the neural network runner is nearly identical to that used in a standard network. The main modification is the declaration of the integer bit-depth: the max neuron value sets the resolution of the activation table, while the max weight value sets the precision of the converted weights. A greater bit-depth (e.g. 16) allows finer resolution, and hence more accuracy, while a lower one (e.g. 8) saves space in the activation table. For the included sample, 12 bits provides a decent depth for both neuron and weight values, and the accuracy lost is only a few percentage points compared to the original floating-point network.
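The conversion step described above amounts to scaling each trained float weight by 2^bit-depth and rounding. A minimal sketch, assuming a 12-bit depth as in the sample (names are illustrative, not the repository's API):

```cpp
#include <cmath>
#include <cstdint>
#include <cstddef>
#include <vector>

// Chosen bit-depth for converted weights (12, as in the sample).
constexpr int WEIGHT_BITS = 12;

// Convert trained floating-point weights to fixed-point integers:
// multiply by 2^WEIGHT_BITS and round to the nearest integer.
std::vector<int32_t> quantize_weights(const std::vector<double>& w) {
    std::vector<int32_t> q(w.size());
    for (std::size_t i = 0; i < w.size(); ++i)
        q[i] = static_cast<int32_t>(std::lround(w[i] * (1 << WEIGHT_BITS)));
    return q;
}
```

Under this scheme a weight of 1.0 becomes 4096 and -0.25 becomes -1024; the rounding error per weight is at most half of 2^-12, which is why a few percentage points of accuracy is all that is lost.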

The main file includes more notes on using the network, and it performs all of the conversion operations needed to take the floating-point values and make them compatible with the integer network. The sample data is the same as that used in bp-neural-net, and the saved values in weights.txt are likewise derived from running the sample program in bp-neural-net.

Contributors

  • miguelpinho
  • spolsley-tamu
