
fido's People

Contributors

benjaminfspector, dakerfp, hmwildermuth, joshuagruenstein, patrickelectric, truell20


fido's Issues

How to compile an example

Hi,
I want to compile ReinforcementLearning.cpp. I am using version 0.02 on Ubuntu 16.04.

When I build with g++/gcc I get "undefined reference to..." errors.
Can you show an example of how to compile it?
Thanks!
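
A plausible invocation, assuming the static library was installed to /usr/local/lib/fido.a (the path used in the OS X issue further down this page) and that the examples need C++11; paths may differ on your system:

g++ ReinforcementLearning.cpp -o reinforcement -std=c++11 /usr/local/lib/fido.a

Linking with g++ rather than gcc also pulls in the C++ standard library, which is a common source of "undefined reference" errors.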

Use builder pattern

Trying to read your example code without comments, or without the documentation handy in a browser window, is a pretty puzzling experience: lots of numeric values without a hint as to what each one might mean. Developers who don't comment their code all that well will leave an unreadable mess when using Fido.

For instance, give this to someone who might have heard about the basics of machine learning but can't remember all the details (let alone the order of parameters in the Fido API):

net::NeuralNet neuralNetwork = net::NeuralNet(1, 1, 2, 4, "sigmoid");
net::Backpropagation backprop = net::Backpropagation(0.1, 0.001, 0.001, 10000);

Nobody knows what this means without looking up the docs. Now imagine if the code looked like this:

net::NeuralNet neuralNetwork = net::NeuralNet::Builder()
    .numInputs(1).numOutputs(1)
    .numHiddenLayers(2).numNeuronsPerHiddenLayer(4)
    .activationFunction("sigmoid")
    .build();

net::Backpropagation backprop = net::Backpropagation::Builder()
    .learningRate(0.1)
    .momentumTerm(0.001)
    .acceptableErrorLevel(0.001)
    .maxTrainingIterations(10000)
    .build();

Not only does the coder get autocompletion and can initialize the object without double-checking the docs, but the reader now also has a clue what's going on. It's not even much longer than the current version with comments, and now everyone writes readable code, not just those who take the time to write a helpful comment.

This is called the builder pattern; Wikipedia has an article on it too. I think it would be a great match for many of the classes in your API.
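
For illustration, a minimal sketch of what such a builder could look like as a nested class, forwarding to the existing five-argument constructor; the field names and defaults here are assumptions, not part of the current API:

// Sketch of a NeuralNet::Builder; defaults are assumptions.
class Builder {
public:
    Builder &numInputs(int n) { inputs = n; return *this; }
    Builder &numOutputs(int n) { outputs = n; return *this; }
    Builder &numHiddenLayers(int n) { hiddenLayers = n; return *this; }
    Builder &numNeuronsPerHiddenLayer(int n) { neuronsPerLayer = n; return *this; }
    Builder &activationFunction(const std::string &name) { activation = name; return *this; }

    net::NeuralNet build() {
        // Forward to the existing constructor shown above.
        return net::NeuralNet(inputs, outputs, hiddenLayers, neuronsPerLayer, activation);
    }

private:
    int inputs = 1, outputs = 1, hiddenLayers = 1, neuronsPerLayer = 1;
    std::string activation = "sigmoid";
};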

Genetic algorithm doesn't work?

So I'm looking at the usage of the genetic algorithm, and I've gone through all of the source code, but I can't seem to find where the network's output gets read, which makes no sense, since the algorithm would need the output to pick the best network. Is this something we have to supply ourselves, or am I just missing it?

Example compilation on OS X

How can we compile one of the examples?

For instance, I am trying
gcc Backpropagation.cpp -o backpropagation -std=c++11 /usr/local/lib/fido.a

but it is not working. I get the error:

Undefined symbols for architecture x86_64:
  "std::__1::__vector_base_common<true>::__throw_length_error() const", referenced from:
      std::__1::vector<double, std::__1::allocator<double> >::__vallocate(unsigned long) in Backpropagation-9a2547.o
      std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >::__vallocate(unsigned long) in Backpropagation-9a2547.o
      std::__1::vector<std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >, std::__1::allocator<std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > > > >::__vallocate(unsigned long) in fido.a(SGDTrainer.o)
      std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >::__vallocate(unsigned long) in fido.a(SGDTrainer.o)
      std::__1::vector<double, std::__1::allocator<double> >::__vallocate(unsigned long) in fido.a(SGDTrainer.o)
      std::__1::vector<std::__1::vector<std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >, std::__1::allocator<std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > > > >, std::__1::allocator<std::__1::vector<std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >, std::__1::allocator<std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > > > > > >::__recommend(unsigned long) const in fido.a(SGDTrainer.o)
      std::__1::vector<net::Neuron, std::__1::allocator<net::Neuron> >::__recommend(unsigned long) const in fido.a(NeuralNet.o)
      ...
  "std::__1::locale::use_facet(std::__1::locale::id&) const", referenced from:
      std::__1::ctype<char> const& std::__1::use_facet<std::__1::ctype<char> >(std::__1::locale const&) in fido.a(SGDTrainer.o)
      std::__1::ctype<char> const& std::__1::use_facet<std::__1::ctype<char> >(std::__1::locale const&) in fido.a(Backpropagation.o)
      std::__1::ctype<char> const& std::__1::use_facet<std::__1::ctype<char> >(std::__1::locale const&) in fido.a(NeuralNet.o)
      std::__1::ctype<char> const& std::__1::use_facet<std::__1::ctype<char> >(std::__1::locale const&) in fido.a(Layer.o)
      ...
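
The missing symbols (__vector_base_common, use_facet) all belong to the C++ standard library, which the gcc driver does not link automatically. A likely fix is to link with the C++ driver instead:

g++ Backpropagation.cpp -o backpropagation -std=c++11 /usr/local/lib/fido.a

or, if you keep gcc, add the C++ runtime explicitly (on macOS, where gcc is usually Clang, that is libc++):

gcc Backpropagation.cpp -o backpropagation -std=c++11 /usr/local/lib/fido.a -lc++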

Pruning test fails

To quote Travis:

pruning.cpp:40: FAILED:
REQUIRE( error1 < error2 )
with expansion:
1155.8756528195 < 1136.796447816

Standardization of Storage

Because of ISEF, I don't really have time to do this right now. However, serialization for Learner, Trainer, and Interpolator subclasses should follow the method outlined here. Right now, unserializing a subclass of one of these three without knowing the subclass is impossible. A rough sketch of one tagged-serialization approach follows the checklist.

  • Learner
  • Trainer
  • Interpolator (Already Done)
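
One common approach (a sketch only; it may or may not match the method the issue links to) is to write a type tag before the object's data so the loader can dispatch to the right subclass. The names below are hypothetical:

// Hypothetical: each Trainer subclass reports a stable tag and can
// store itself to a stream.
void storeTrainer(net::Trainer *trainer, std::ofstream *out) {
    *out << trainer->typeTag() << "\n";  // e.g. "Backpropagation"
    trainer->store(out);
}

net::Trainer *loadTrainer(std::ifstream *in) {
    std::string tag;
    *in >> tag;
    if (tag == "Backpropagation") return new net::Backpropagation(in);
    // ... one branch per subclass ...
    throw std::runtime_error("Unknown trainer type: " + tag);
}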

Easy Switching of Drivers

Simlink should be a subclass of a "Driver" superclass. Hardware drivers should subclass this, making it easy to switch models between the simulator and hardware.
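
A rough sketch of the proposed hierarchy; the method names are made up for illustration and would mirror whatever Simlink currently exposes:

class Driver {
public:
    virtual ~Driver() {}
    virtual void setMotors(double left, double right) = 0;
    virtual std::vector<double> getSensorValues() = 0;
};

class Simlink : public Driver { /* simulator implementation */ };
class HardwareDriver : public Driver { /* talks to the physical robot */ };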

Save network training state.

I have seen from the tests that it is possible to save the network, but is it possible to save its weights after it has been trained? I.e., I would like to train the net and then store it to flash, so that the next time I boot up my embedded project it is already trained. Is this possible?

Great project btw!
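
For reference, the storing half is already possible via the store(std::ofstream*) call used in the repository's examples; the loading side below is an assumption and should be checked against NeuralNet.h:

std::ofstream out("trained.txt");
neuralNetwork.store(&out);  // serialize the trained weights
out.close();

// On the next boot, restore it. The load-from-stream constructor is
// an assumption; verify the exact API in the headers.
std::ifstream in("trained.txt");
net::NeuralNet restored(&in);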

No throws for impossible neural network conditions

If I run code that calls the NeuralNet constructor with negative numbers, it misbehaves. Also, the activation function string can currently be anything, but the constructor should throw an error when the string is not part of the activation-function map.

Maybe throw an exception, or print an error and terminate, whenever one of these preconditions is violated.
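
A sketch of the checks the constructor could perform, throwing std::runtime_error as the error-handling issue below suggests; activationMap is a placeholder for however the library maps names to activation functions:

NeuralNet::NeuralNet(int inputs, int outputs, int hiddenLayers,
                     int neuronsPerLayer, std::string activation) {
    if (inputs < 1 || outputs < 1 || hiddenLayers < 0 || neuronsPerLayer < 1) {
        throw std::runtime_error("NeuralNet: layer counts must be positive");
    }
    if (activationMap.find(activation) == activationMap.end()) {
        throw std::runtime_error("NeuralNet: unknown activation function '" + activation + "'");
    }
    // ... existing construction code ...
}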

Poor error handling

Errors are handled by throwing 1 and printing the message with std::cout. This should instead use std::runtime_error.

Simulator's loading of resources

==9371== Command: ./tests.o
==9371== 
==9371== Invalid read of size 4
==9371==    at 0x507CBB4: sf::priv::GlxContext::GlxContext(sf::priv::GlxContext*) (in /usr/lib/libsfml-window.so.2.1)
==9371==    by 0x5076B9C: sf::priv::GlContext::globalInit() (in /usr/lib/libsfml-window.so.2.1)
==9371==    by 0x5077482: sf::GlResource::GlResource() (in /usr/lib/libsfml-window.so.2.1)
==9371==    by 0x5079A55: sf::Window::Window() (in /usr/lib/libsfml-window.so.2.1)
==9371==    by 0x4E5C3C5: sf::RenderWindow::RenderWindow() (in /usr/lib/libsfml-graphics.so.2.1)
==9371==    by 0x467A30: Simlink::Simlink() (in /home/travis/build/FidoProject/Fido/tests/tests.o)
==9371==    by 0x45A872: ____C_A_T_C_H____T_E_S_T____8() (in /home/travis/build/FidoProject/Fido/tests/tests.o)
==9371==    by 0x4271CD: Catch::FreeFunctionTestCase::invoke() const (in /home/travis/build/FidoProject/Fido/tests/tests.o)
==9371==    by 0x4132F6: Catch::TestCase::invoke() const (in /home/travis/build/FidoProject/Fido/tests/tests.o)
==9371==    by 0x4261DA: Catch::RunContext::invokeActiveTestCase() (in /home/travis/build/FidoProject/Fido/tests/tests.o)
==9371==    by 0x425E69: Catch::RunContext::runCurrentTest(std::string&, std::string&) (in /home/travis/build/FidoProject/Fido/tests/tests.o)
==9371==    by 0x424810: Catch::RunContext::runTest(Catch::TestCase const&) (in /home/travis/build/FidoProject/Fido/tests/tests.o)
==9371==  Address 0xe0 is not stack'd, malloc'd or (recently) free'd
==9371== 

OCR Memory Usage

I was trying to make an OCR neural network using the MNIST image dataset, but every time I ran it the process was killed by a kernel mechanism called the OOM killer, which kills processes that use too much memory. I am not sure whether this is because of my code or something in the backpropagation code. Either way, any help would be appreciated.

Also, just to note: when I run the program with the learning sample size cut down to only 250 images it works, but above 500 it fails.

The C++ File:

#include "ocr.h"

int main(int argc, char const *argv[]) {
    std::string labelsPath = "train-labels.idx1-ubyte";
    std::string imagesPath = "train-images.idx3-ubyte";
    std::string outputFilename = (argc > 1) ? argv[1] : "ocr.txt";

    int numSamples;  // set by the readers below; image count == label count
    int imageSize;   // pixels per image (rows * cols)

    std::cout << "Loading images from files..." << std::endl;

    auto inputArr = read_mnist_images(imagesPath, numSamples, imageSize);
    auto outputArr = read_mnist_labels(labelsPath, numSamples);

    // inputs, outputs, hidden layers, neurons per hidden layer, activation
    net::NeuralNet neuralNetwork = net::NeuralNet(imageSize, 10, 1, imageSize, "sigmoid");

    std::vector< std::vector<double> > input;
    std::vector< std::vector<double> > correctOutput;

    std::cout << "Loading into vector...\n";
    for (size_t i = 0; i < mgicNum; i++) {
        std::vector<double> imgeArr;
        for (size_t j = 0; j < sizeNum; j++) {
            imgeArr.push_back(double(inputArr[i][j])/double(255));
        }
        //std::cout << imgeArr.size() << "; " << sizeNum << "\n";
        input.push_back(imgeArr);
        correctOutput.push_back(digits(outputArr[i]));
    }

    std::cout << "Done with loading.\n";

    std::cout << "Freeing memory..." << std::endl;

    delete [] inputArr; // <- Is this how you use delete? idk
    delete [] outputArr;

    // free(inputArr);
    // free(outputArr);

    std::cout << "Done with freeing memory." << std::endl;

    std::cout << "Supposed # of samples: " << mgicNum << std::endl;
    std::cout << "Actual # of samples: " << input.size() << std::endl;
    net::Backpropagation backprop = net::Backpropagation(0.01, 0.9, 0.1, 10);
    std::cout << "Inputs: " << neuralNetwork.numberOfInputs() << std::endl;
    std::cout << "Hidden: " << neuralNetwork.numberOfHiddenNeurons() << std::endl;
    std::cout << "Outputs: " << neuralNetwork.numberOfOutputs() << std::endl;

    std::cout << "Input array: " << input[0].size() << std::endl;
    std::cout << "Correct array: " << correctOutput[0].size() << std::endl;

    if (input.size() != correctOutput.size()) {
        throw std::runtime_error("Differing sizes between two of the same thing");
    }

    /* To decrease memory usage

    #define RESIZE_Value 500

    // Works at 100, 250
    // Killed at 500 and above

    std::cout << "Resizing arrays to " << RESIZE_Value << " each..." << std::endl;

    input.resize(RESIZE_Value);
    correctOutput.resize(RESIZE_Value);

    // */

    std::cout << "Beginning training..." << std::endl;

    backprop.train(&neuralNetwork, input, correctOutput);

    std::cout << "Done training. Storing..." << std::endl;

    std::ofstream myfile;
    myfile.open(outputFilename);
    neuralNetwork.store(&myfile);
    myfile.close();

    std::cout << "Done storing to output file '" << outputFilename << "'. Testing..." << std::endl;

    #define TEST_INDEX 23 // Random test index

    std::cout << "Test: " << findTop(neuralNetwork.getOutput(input[TEST_INDEX])) << std::endl;
    std::cout << "Correct answer: " << findTop(correctOutput[TEST_INDEX]) << std::endl;

    return 0;
}
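
One way to bound peak memory, sketched under the assumption that train() can be called repeatedly on successive slices (worth verifying, since each call runs to its own stopping condition):

// Train on fixed-size chunks instead of all samples at once.
const size_t chunk = 250;  // the size reported to work above
for (size_t start = 0; start < input.size(); start += chunk) {
    size_t end = (start + chunk < input.size()) ? start + chunk : input.size();
    std::vector< std::vector<double> > batchIn(input.begin() + start, input.begin() + end);
    std::vector< std::vector<double> > batchOut(correctOutput.begin() + start, correctOutput.begin() + end);
    backprop.train(&neuralNetwork, batchIn, batchOut);
}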

The header file, which contains functions for loading the test images and for picking the highest-valued member of an array:
(I copied the MNIST reading functions from somewhere else.)

#include "include/Fido.h"
#ifndef OCR
#define OCR

typedef unsigned char uchar;

uchar** read_mnist_images(std::string full_path, int& number_of_images, int& image_size) {
    auto reverseInt = [](int i) {
        unsigned char c1, c2, c3, c4;
        c1 = i & 255, c2 = (i >> 8) & 255, c3 = (i >> 16) & 255, c4 = (i >> 24) & 255;
        return ((int)c1 << 24) + ((int)c2 << 16) + ((int)c3 << 8) + c4;
    };

    std::ifstream file(full_path, std::ios::binary);  // MNIST files are binary

    if(file.is_open()) {
        int magic_number = 0, n_rows = 0, n_cols = 0;

        file.read((char *)&magic_number, sizeof(magic_number));
        magic_number = reverseInt(magic_number);

        if(magic_number != 2051) throw std::runtime_error("Invalid MNIST image file!");

        file.read((char *)&number_of_images, sizeof(number_of_images)), number_of_images = reverseInt(number_of_images);
        file.read((char *)&n_rows, sizeof(n_rows)), n_rows = reverseInt(n_rows);
        file.read((char *)&n_cols, sizeof(n_cols)), n_cols = reverseInt(n_cols);

        image_size = n_rows * n_cols;

        uchar** _dataset = new uchar*[number_of_images];
        for(int i = 0; i < number_of_images; i++) {
            _dataset[i] = new uchar[image_size];
            file.read((char *)_dataset[i], image_size);
        }
        return _dataset;
    } else {
        throw std::runtime_error("Cannot open file `" + full_path + "`!");
    }
}

uchar* read_mnist_labels(std::string full_path, int& number_of_labels) {
    auto reverseInt = [](int i) {
        unsigned char c1, c2, c3, c4;
        c1 = i & 255, c2 = (i >> 8) & 255, c3 = (i >> 16) & 255, c4 = (i >> 24) & 255;
        return ((int)c1 << 24) + ((int)c2 << 16) + ((int)c3 << 8) + c4;
    };

    std::ifstream file(full_path, std::ios::binary);  // MNIST files are binary

    if(file.is_open()) {
        int magic_number = 0;
        file.read((char *)&magic_number, sizeof(magic_number));
        magic_number = reverseInt(magic_number);

        if(magic_number != 2049) throw std::runtime_error("Invalid MNIST label file!");

        file.read((char *)&number_of_labels, sizeof(number_of_labels)), number_of_labels = reverseInt(number_of_labels);

        uchar* _dataset = new uchar[number_of_labels];
        for(int i = 0; i < number_of_labels; i++) {
            file.read((char*)&_dataset[i], 1);
        }
        return _dataset;
    } else {
        throw std::runtime_error("Unable to open file `" + full_path + "`!");
    }
}

std::vector<double> digits(uchar j) {
    std::vector<double> v;
    for (size_t i = 0; i < 10; i++) {
        if (j == i) {
            v.push_back(1);
        } else {
            v.push_back(0);
        }
    }
    return v;
}

int findTop(std::vector<double> v) {
    int best = -1;
    double top = -1.0;
    for (size_t i = 0; i < 10; i++) {
        if (v[i] > top) {
            best = i;
            top = v[i];
        }
    }
    return best;
}

#endif

Hyperparameter Optimization

It would be great if we could implement:

  • Pruning of the neural network architecture (PrunableLayer could be a subclass of Layer)
  • Optimization of the exploration level in the Fido control system
  • Optimization of the learning constants, by implementing some of the trainers described in #26 (a rough random-search sketch follows this list)
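
Until those land, even a naive random search over the Backpropagation constants can help. A sketch using the constructor signature that appears elsewhere on this page; evaluate() is a hypothetical helper returning validation error:

#include <cstdlib>

double uniform() { return rand() / double(RAND_MAX); }

void randomSearch() {
    double bestError = 1e9;
    for (int trial = 0; trial < 20; trial++) {
        double learningRate = 0.001 + uniform() * 0.1;  // crude range
        double momentum = uniform() * 0.9;
        net::Backpropagation candidate(learningRate, momentum, 0.001, 10000);
        double error = evaluate(candidate);  // hypothetical validation measure
        if (error < bestError) bestError = error;
    }
}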

Mac Installation

There are currently simulator compile errors on Mac computers. In addition, files are installed to /usr/local, so Xcode users cannot access them without pointing their projects at that directory.
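
As a stopgap, Xcode users can point a target at the install location through the standard build settings (nothing Fido-specific here):

HEADER_SEARCH_PATHS = /usr/local/include
LIBRARY_SEARCH_PATHS = /usr/local/lib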

Tasks should use any learning model

Tasks should be able to use any learning model. Once the Learner abstract class is implemented, Tasks should take a Learner instead of a specific model.
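
A sketch of what the abstract class might expose; the method names are hypothetical, based on how the models are described elsewhere in these issues:

class Learner {
public:
    virtual ~Learner() {}
    // Choose an action vector for the current state.
    virtual std::vector<double> chooseAction(const std::vector<double> &state) = 0;
    // Incorporate the reward received for the last action taken.
    virtual void applyReward(double reward) = 0;
};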

Linguist is wrong

Linguist (GitHub's language-detection library) counts catch.h (the unit-testing library we vendor) toward the repository's language statistics, which makes it look like we wrote this project in Objective-C.
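
Linguist honors per-path overrides in a .gitattributes file, so marking the vendored header should fix the statistics (adjust the path to wherever catch.h actually lives):

tests/catch.h linguist-vendored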

More Unit Tests

Possible tests

  • Storage and loading of all objects (see the sketch after this list)
  • Backpropagation training
  • Interpolator accuracy
  • QLearn convergence
  • WireFitQLearn convergence
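
As a starting point for the first item, a sketch in the Catch style the test suite already uses; the load-from-stream constructor is an assumption to verify against NeuralNet.h, and exact equality assumes lossless serialization:

TEST_CASE("NeuralNet storage round-trip") {
    net::NeuralNet original(2, 1, 1, 4, "sigmoid");

    std::ofstream out("roundtrip.txt");
    original.store(&out);
    out.close();

    std::ifstream in("roundtrip.txt");
    net::NeuralNet restored(&in);  // assumed load API

    std::vector<double> sample = {0.25, 0.75};
    REQUIRE(original.getOutput(sample) == restored.getOutput(sample));
}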

API for dealing with TensorFlow models?

Hi guys,
I am trying to run TensorFlow-trained models on embedded systems, say, a Raspberry Pi, or even much less powerful boards with no OS.

So, have you ever thought about developing something that can use TensorFlow models on very tiny devices?

Cheers,
Dorje

Robot superclass

We should have an abstract Robot superclass that would perform tasks when given a learning model and a Task. WireFitRobot and Robot (currently used with q-learning) should be eliminated.
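
Roughly something like the following, with all names hypothetical:

class Robot {
public:
    Robot(Learner *learner, Task *task) : learner(learner), task(task) {}
    virtual ~Robot() {}
    // Run one perceive-act-reward cycle against the task.
    virtual void step() = 0;
protected:
    Learner *learner;
    Task *task;
};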

Doxygen Documentation

We need to switch to the Doxygen comment style so that the generated HTML documentation includes our descriptions.
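
For reference, the Javadoc-style block that Doxygen picks up by default (the signature shown is illustrative):

/**
 * \brief Runs a feed-forward pass through the network.
 *
 * \param input one value per input neuron
 * \return the network's output vector
 */
std::vector<double> getOutput(std::vector<double> input);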
