Named after the matrix chain (matrix multiplications in series); after all, that's what neural nets are.
This project is a fun implementation of a neural network, built to understand how things work under the hood.
- Optim class -- optimizers
- Model class -- the neural network model
- MLOPS -- operators required for deep learning, such as Sigmoid, ReLU, etc.
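To illustrate the MLOPS idea, here is a rough NumPy sketch of two such operators. The function names below are illustrative, not maxine's actual API:

```python
import numpy as np

def sigmoid(x):
    # squashes any real input into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # zeroes out negatives, keeps positives unchanged
    return np.maximum(x, 0.0)

print(sigmoid(0.0))                  # 0.5
print(relu(np.array([-1.0, 2.0])))   # [0. 2.]
```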
Here are the steps to follow:
- Write a model
- Define a loss function
- Create an optimizer such as SGD or Adam
- Load the data into tensors with a data loader
- Write a training loop
- Train it
- Train on an MNIST subset, e.g. 3 vs 7
- Train on the full MNIST
- Utilities -- saving/loading a pickle file
- Expand
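To make the steps above concrete, here is a minimal training loop sketched in plain NumPy for a toy linear model. It does not use maxine's classes; the gradient is derived by hand and all names (`lr`, `w`, `b`) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data: y = 2x + 1 with a little noise
X = rng.normal(size=(100, 1))
y = 2.0 * X + 1.0 + 0.01 * rng.normal(size=(100, 1))

# Model parameters
w = np.zeros((1, 1))
b = np.zeros(1)
lr = 0.1  # SGD learning rate

for epoch in range(200):
    pred = X @ w + b                   # forward pass
    err = pred - y
    loss = np.mean(err ** 2)           # MSE loss
    grad_w = 2.0 * X.T @ err / len(X)  # hand-derived gradients
    grad_b = 2.0 * err.mean(axis=0)
    w -= lr * grad_w                   # SGD update
    b -= lr * grad_b

print(w, b)  # should be close to 2 and 1
```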
Libraries used:
- NumPy
- PyTorch's Tensor, only for gradient calculation
- tqdm
Much later goals for this project:
- Remove the PyTorch dependency (gradients)
- Create a tensor module
Here is a demo of a forward pass on MNIST-shaped data:

```python
import torch
import maxine.nn as nn

x = torch.randn(60000, 1, 28, 28)  # fake MNIST batch
x = x.view(x.size(0), -1)          # flatten to (60000, 784)
ih = nn.Linear(784, 10)
ih.forward(x)
```
TODO
Computing accuracy with the metrics module:

```python
import torch
from maxine.metrics import accuracy

a = torch.randn(3, 5).normal_(0, 1)
b = torch.Tensor([2, 1, 5])
accuracy(a, b)
# output:
# tensor(0.3333)
```
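The idea behind an accuracy metric like this is argmax-based: the predicted class is the column with the highest score in each row. A NumPy sketch of that idea (not maxine's actual implementation; `accuracy_np` is a name of my choosing):

```python
import numpy as np

def accuracy_np(scores, targets):
    # predicted class = column with the highest score per row;
    # accuracy = fraction of rows where prediction equals target
    preds = scores.argmax(axis=1)
    return float((preds == np.asarray(targets)).mean())

scores = np.array([[0.1, 0.9],
                   [0.8, 0.2],
                   [0.3, 0.7]])
print(accuracy_np(scores, [1, 1, 1]))  # 2 of 3 rows match -> 0.666...
```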
Install dependencies:

```shell
pip install -r requirements.txt
pip install -r requirements-dev.txt
```
Install the project:

```shell
pip install -e .
```