
InfoNEAT's Introduction

InfoNEAT is a member of the NEAT (NeuroEvolution of Augmenting Topologies) family of algorithms. It evolves neural networks, knows when to stop learning, and knows how to scale up as a multi-class algorithm with many (e.g., 256) classes. In a nutshell, it resolves two issues with the NEAT algorithm: application in multi-class scenarios and generic criteria for stopping the training/evolution process.

This repository presents the InfoNEAT framework used to train NN models that perform side-channel analysis (SCA). The results will appear in the IACR Transactions on Cryptographic Hardware and Embedded Systems (TCHES 2023, Issue 1). The paper is also available on arXiv.

Python packages to install: scikit-learn and h5py (pickle and csv are part of the Python standard library, so they do not need to be installed separately).
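
The dependency check below is only an illustrative sketch of the setup step; package versions are not specified in this README:

    # Minimal environment check (illustrative sketch): pickle and csv ship with Python,
    # so only scikit-learn and h5py need to be installed, e.g., via `pip install scikit-learn h5py`.
    import csv
    import pickle

    import h5py
    import sklearn

    print("h5py", h5py.__version__, "| scikit-learn", sklearn.__version__)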

Instructions:

    1. Extract the files from the folder 'InfoNEAT'.

    2. The 'InfoNEAT' folder contains a config folder with the configuration parameters of the NEAT-based algorithm for each dataset. The src folder contains all the source code to train sub-models and stacked models, and to split the dataset into different cross-validation folds.

    3. The file 'train.py' contains the code to train the sub-models and the stacked model for each dataset. Change lines 9-14 accordingly (a hypothetical sketch of these settings is shown after this list). If needed, set dataset_split = True (line 15) to create cross-validation folds of the dataset. Please save the .h5 dataset files for each dataset under the respective folders.
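
As a rough illustration of step 3, the settings near the top of 'train.py' can be summarized as below. Only dataset_split, num_of_folds, and fold_num are named in this README; the remaining variable names and paths are hypothetical placeholders, not the actual code:

    # Hypothetical sketch of the settings edited near the top of train.py (lines 9-15);
    # names other than dataset_split, num_of_folds, and fold_num are placeholders.
    dataset_name = "ASCAD_fixed_key"   # e.g., AES_HD, ASCAD_fixed_key, or ASCAD_variable_key
    dataset_path = "datasets/" + dataset_name + "/" + dataset_name + ".h5"  # .h5 file saved under the dataset's folder
    config_path = "config/" + dataset_name                                  # NEAT configuration parameters for this dataset
    num_of_folds = 5       # number of cross-validation folds
    fold_num = 1           # which fold to train the sub-models and stacked model on
    dataset_split = True   # line 15: set to True to create the cross-validation folds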

Results (after running the file 'train.py'):

    1. If dataset_split is set to True, then k folds of the dataset will be created and saved under the datasets folder (num_of_folds is currently set to 5; a hypothetical sketch of such a split is shown after this list).

    2. 256 sub-models and a stacked model for a specific fold will be created and saved under the models folder (currently, fold_num is set to 1).
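
For reference, creating and saving k folds of a profiling set could look like the sketch below. This is only an assumption about the general approach (a stratified split with scikit-learn), not the repository's exact code; the HDF5 paths and group names are hypothetical:

    # Illustrative sketch of splitting a dataset into cross-validation folds and saving them;
    # file paths, HDF5 group names, and the use of StratifiedKFold are assumptions.
    import h5py
    import numpy as np
    from sklearn.model_selection import StratifiedKFold

    num_of_folds = 5  # matches the default mentioned above

    with h5py.File("datasets/ASCAD_fixed_key/ASCAD_fixed_key.h5", "r") as f:  # hypothetical path
        traces = np.array(f["Profiling_traces/traces"])   # hypothetical group names
        labels = np.array(f["Profiling_traces/labels"])

    skf = StratifiedKFold(n_splits=num_of_folds, shuffle=True, random_state=0)
    for fold_num, (train_idx, val_idx) in enumerate(skf.split(traces, labels), start=1):
        with h5py.File("datasets/fold_%d.h5" % fold_num, "w") as out:  # hypothetical output layout
            out.create_dataset("train/traces", data=traces[train_idx])
            out.create_dataset("train/labels", data=labels[train_idx])
            out.create_dataset("val/traces", data=traces[val_idx])
            out.create_dataset("val/labels", data=labels[val_idx])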

Note: the models/submodels folder contains all the sub-models for the three datasets. The models/stacked_models folder contains the stacked models for one fold of the AES_HD and ASCAD_fixed key datasets, as well as the stacked models for the ASCAD_variable key dataset (no cross-validation was performed for the ASCAD_variable key dataset).
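
Since pickle is listed among the requirements and the trained models are stored under the models folder, loading a saved sub-model or stacked model for later use presumably looks something like the sketch below; the file names are hypothetical and the objects' interfaces are not documented here:

    # Illustrative sketch of loading saved models with pickle; the paths below are hypothetical
    # and are based only on the folder layout described in the note above.
    import pickle

    with open("models/submodels/submodel_0.pkl", "rb") as f:           # hypothetical file name
        submodel = pickle.load(f)

    with open("models/stacked_models/stacked_fold_1.pkl", "rb") as f:  # hypothetical file name
        stacked_model = pickle.load(f)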


###################################################
Note: modifications to the original NEAT code can be found in the folder 'neat'. These include, in particular, the code to train a sub-model using batches of data, the code to train sub-models for multiple classes or labels, and the inclusion of the CMI criteria used to train effective sub-models.
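
To illustrate what batch-based training of a sub-model means in the NEAT setting, a generic fitness-evaluation loop in the style of the neat-python library is sketched below; this reflects the standard library API with a simple accuracy-based fitness, not the modified code (or the CMI criteria) in the 'neat' folder:

    # Generic neat-python-style evaluation of genomes on one batch of data; an illustrative sketch,
    # not the modified NEAT code in this repository (which uses CMI-based criteria instead).
    import neat

    def eval_genomes_on_batch(genomes, config, batch_x, batch_y):
        for genome_id, genome in genomes:
            net = neat.nn.FeedForwardNetwork.create(genome, config)
            correct = 0
            for x, y in zip(batch_x, batch_y):
                output = net.activate(x)             # per-class scores from the evolved network
                if output.index(max(output)) == y:   # predicted class vs. true label
                    correct += 1
            genome.fitness = correct / len(batch_x)  # placeholder accuracy-based fitness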
