ANN for MNIST Task

A basic Artificial Neural Network built from scratch for the MNIST task (recognition of handwritten digits). The network contains only two layers: an input layer with 784 nodes, fully connected to an output layer with 10 nodes, and no activation function. By default the ANN was trained on only 19,000 of the 60,000 available images, so if you want to improve the accuracy of the predictions, you are welcome to train the model further.

Before you start

In order to make some cool predictions you first must download, decompress, and store the dataset in the right location. The quickest way is to run the command below in your bash prompt.

$ ./setup.sh

Make your first prediction

python main.py --option test

Train the ANN

python main.py --option train

Artificial Neural Network (Own Notes)

What is an ANN?

    An Artificial Neural Network is a computational model of
    how the brain itself processes information. This computational
    model works on the basis of input(s)
    to produce some output(s) as a result. Generally the goal
    is to yield predictions that fit
    the distribution of acceptable answers.

What are the main components of an ANN?

    The main components of an ANN, with their respective purposes, are:
        Node: At the simplest level of abstraction, a node stores
            the result of the weighted sum produced by its predecessor nodes and their respective weights.
        Input Layer: A set of node(s) whose main purpose is to receive the
            input numeric data.
        Output Layer: A set of node(s) whose main purpose is to yield a numerical result,
            which is the prediction, after forward propagation.
        Weight: A numerical variable that is multiplied
            with the data that passes through to the node. Its main purpose is to be changed:
            if a prediction is not accurate, we can adjust the ANN's set of weights
            to obtain predictions that fit acceptable results.
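As a concrete sketch, the two-layer layout described in these notes (784 input nodes fully connected to 10 output nodes, no activation function) can be written in a few lines of NumPy. The weight values here are random placeholders, not the repository's trained weights:

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_outputs = 784, 10  # input layer and output layer sizes
# One weight per connection between an input node and an output node.
weights = rng.normal(scale=0.01, size=(n_inputs, n_outputs))

image = rng.random(n_inputs)   # stands in for a flattened 28x28 image
output = image @ weights       # each output node stores a weighted sum
print(output.shape)            # (10,) -- one value per digit class
```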

How does it learn?

    An ANN learns by reducing its error.
    The error comes from the inaccuracy of the prediction in contrast
    with the real result. In order to make better predictions, we can adjust
    some variables inside the network. These variables are called weights.
    In this case we are supervising the predictions that this ANN yields,
    and telling the network how much error it produced for each output.

How to make a prediction?

    Making a prediction basically means taking some numerical input(s)
    and following the weighted sums through each layer's nodes until we reach the output layer.
    This is also called forward propagation, because the data is propagated
    forward through the network, being transformed along the way
    by the weights and any activation functions,
    in order to obtain a suitable prediction.
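For this two-layer network, forward propagation reduces to a single matrix-vector product followed by picking the largest score. A minimal sketch, using random stand-in values since the trained weights live in the repository's own data files:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(scale=0.01, size=(784, 10))  # stand-in for trained weights
image = rng.random(784)                           # stand-in for a flattened MNIST image

scores = image @ weights             # the weighted sum at each output node
prediction = int(np.argmax(scores))  # the node with the largest score is the predicted digit
print(prediction)
```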

How to train the ANN?

    In order to make the ANN more accurate
    we need to reduce the error it produces.
    
    Firstly, we have to measure the error.
    There are plenty of ways to calculate a prediction's error.
    The basic idea is:
        error = prediction - real_result = delta
        
    Other ideas:
        error_abs = abs(prediction - real_result) = abs(error)
        error_squared = (prediction - real_result)^2 = error^2
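The three error measures can be checked on a toy prediction (values chosen so the arithmetic is exact in floating point):

```python
prediction, real_result = 0.75, 1.0

delta = prediction - real_result        # signed error: -0.25
error_abs = abs(delta)                  # 0.25
error_squared = delta ** 2              # 0.0625
print(delta, error_abs, error_squared)  # -0.25 0.25 0.0625
```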
     
    Secondly, we have to understand the relationship between the error and the weights, because obviously if we change the weight(s) we
    are going to obtain a different error value. Thus the question is: in which direction does each weight need to 'move'
    in order to make the error as small as possible? In this case we only have to multiply delta, which is the error,
    by the derivative of the input*weight product with respect to the weight. This derivative is just the input itself. Thus
    we calculate weight_delta = delta * input, which is basically how much the weight contributed to the inaccurate result.
    And therefore we are able to correct the weight.
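Putting the two steps together for a single weight, one update looks like the sketch below. The learning rate `alpha` is an assumption on my part (it is not in the notes above); it just keeps each correction small:

```python
input_value = 0.5
weight = 0.8
real_result = 1.0

prediction = input_value * weight   # forward pass: the weighted sum
delta = prediction - real_result    # signed error (negative: we predicted too low)
weight_delta = delta * input_value  # how much this weight caused the error
alpha = 0.1                         # learning rate (assumed, not part of the notes)
weight -= alpha * weight_delta      # corrected weight
print(weight)
```

After the update the new prediction `input_value * weight` is closer to `real_result` than before, which is exactly the "reduce the error" goal.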

License

MIT
