
LSTM_peptides

Introduction

This repository contains scripts for training a generative long short-term memory (LSTM) recurrent neural network on peptide sequences. The user provides a set of amino acid sequences to train the model and can then sample new sequences that resemble the training data; in this way, artificial intelligence is put in charge of the de novo design of new peptide sequences. The code in this repository relies on the keras package by Chollet and others (https://github.com/fchollet/keras) with the tensorflow backend (http://tensorflow.org), as well as on scikit-learn (http://scikit-learn.org) and modlamp (https://modlamp.org).

Content

  • README.md: this file
  • LSTM_peptides.py: contains the main code in the following two classes:
    • SequenceHandler: class used for reading amino acid sequences and translating them into a one-hot vector encoding (a sketch follows this list).
    • Model: class that generates and trains the model, can perform cross-validation and plot training and validation loss.
  • requirements.txt: requirements / package dependencies
  • LICENSE: MIT open-source license
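
As an illustration of the one-hot encoding performed by SequenceHandler, consider the minimal sketch below; the alphabet and the handling of padding or start characters in the actual code are assumptions here:

import numpy as np

AA = 'ACDEFGHIKLMNPQRSTVWY'  # 20 canonical amino acids (assumed alphabet)

def one_hot(seq):
    # encode a peptide as a binary matrix of shape (len(seq), len(AA))
    x = np.zeros((len(seq), len(AA)), dtype=np.float32)
    for i, aa in enumerate(seq):
        x[i, AA.index(aa)] = 1.0
    return x

print(one_hot('GLFDIVK').shape)  # (7, 20)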

How To Install And Use

Clone the repository to your computer:

git clone https://github.com/alexarnimueller/LSTM_peptides

Then, install all requirements listed in requirements.txt. In the cloned folder, type:

pip install -r requirements.txt

Finally, run the model as follows (providing your own parameters; see the list below):

python LSTM_peptides.py --dataset $TRAINING_DATA_FILE --name $YOUR_RUN_NAME $FURTHER_OPTIONAL_PARAMETERS

Parameters:

  • dataset (default=training_sequences_noC.csv)
    • File containing the training data, one sequence per line.
  • name (default=test)
    • Run name for all generated data; a new directory will be created with this name.
  • batch_size (OPTIONAL, default=128)
    • Batch size to use by the model.
  • epochs (OPTIONAL, default=50)
    • Number of epochs to train the model.
  • layers (OPTIONAL, default=2)
    • Number of LSTM layers in the model.
  • neurons (OPTIONAL, default=256)
    • Number of units per LSTM layer.
  • cell (OPTIONAL, default=LSTM)
    • Type of recurrent cell to use; available: LSTM, GRU.
  • dropout (OPTIONAL, default=0.1)
    • Fraction of dropout to apply to the network. This scales with depth, so layer 1 gets 1 * dropout, layer 2 gets 2 * dropout, etc. (a model sketch follows this parameter list).
  • mode (OPTIONAL, choices=[pretrain, finetune, sample], default=pretrain)
    • Whether to pre-train (pretrain), fine-tune (finetune) or just sample from a pre-trained model (sample).
  • valsplit (OPTIONAL, default=0.2)
    • Fraction of the data to use for model validation. If 0, no validation is performed.
  • sample (OPTIONAL, default=100)
    • Number of sequences to sample from the model after training.
  • temp (OPTIONAL, default=1.25)
    • Temperature to use for sampling (a sampling sketch follows this parameter list).
  • maxlen (OPTIONAL, default=0)
    • Maximum sequence length allowed when sampling new sequences. If 0, the length of the longest training sequence is used.
  • startchar (OPTIONAL, default=B)
    • Character used as the start-of-sequence token when generating sequences.
  • lr (OPTIONAL, default=0.01)
    • Learning rate to be used for Adam optimizer.
  • modfile (OPTIONAL, default=None)
    • If mode=sample or mode=finetune, a pre-trained model file needs to be provided, e.g. modfile=./checkpoint/model_epoch_49.hdf5.
  • cv (OPTIONAL, default=None)
    • Folds of cross-validation to use for model validation. If None, no cross-validation is performed.
  • window (OPTIONAL, default=0)
    • Size of window to use for enhancing training data by sliding-windows. If 0, all sequences are padded to the length of the longest sequence in the data set.
  • step (OPTIONAL, default=1)
    • Step size to move the sliding window or the prediction target.
  • target (OPTIONAL, default=all)
    • Whether to learn all subsequent characters ('all') or just the single next character ('one') in the sequence.
  • padlen (OPTIONAL, default=0)
    • Number of trailing padding characters to add to the sequences. If 0, sequences are padded to the length of the longest sequence in the dataset.
  • refs (OPTIONAL, default=True)
    • Whether reference sequence sets should be generated for the analysis.
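
To illustrate how the layers, neurons, dropout and lr options fit together, here is a minimal sketch of a stacked recurrent model with depth-scaled dropout. It is an assumption of how such a network is typically wired in Keras, not a copy of this repository's exact code; the vocabulary size of 22 (20 amino acids plus padding and start characters) is likewise an assumption:

from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.optimizers import Adam

neurons, dropout, vocab = 256, 0.1, 22  # assumed vocabulary size

model = Sequential()
# dropout scales with depth: layer 1 gets 1 * dropout, layer 2 gets 2 * dropout
model.add(LSTM(neurons, return_sequences=True, dropout=1 * dropout, input_shape=(None, vocab)))
model.add(LSTM(neurons, return_sequences=True, dropout=2 * dropout))
model.add(Dense(vocab, activation='softmax'))
model.compile(optimizer=Adam(lr=0.01), loss='categorical_crossentropy')

The temp option controls temperature sampling, which rescales the network's output probabilities before the next residue is drawn. A minimal sketch of the standard technique (not necessarily this repository's exact implementation):

import numpy as np

def sample_with_temperature(preds, temp=1.25):
    # rescale log-probabilities by the temperature and renormalize;
    # temp > 1 flattens the distribution (more diverse sequences),
    # temp < 1 sharpens it (more conservative sequences)
    preds = np.log(np.asarray(preds, dtype=np.float64) + 1e-12) / temp
    probs = np.exp(preds) / np.sum(np.exp(preds))
    return np.random.choice(len(probs), p=probs)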

Example: pre-training a 2-layer model with 64 neurons on new sequences for 100 epochs

python LSTM_peptides.py --mode pretrain --name train100 --dataset new_sequences.csv --layers 2 --neurons 64 --epochs 100

Example: sampling 100 sequences from a pre-trained model

python LSTM_peptides.py --mode sample --name testsample --modfile pretrained_model/checkpoint/model_epoch_99.hdf5 --sample 100

Example: fine-tuning a pre-trained model on a fine-tuning set for 10 epochs

python LSTM_peptides.py --mode finetune --name finetune10 --dataset finetune_set.csv --modfile pretrained_model/checkpoint/model_epoch_99.hdf5 --epochs 10
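
Example: 5-fold cross-validation with sliding-window data augmentation (the dataset file name here is a placeholder)

python LSTM_peptides.py --mode pretrain --name cv5run --dataset my_sequences.csv --cv 5 --window 20 --step 2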

Cite

When using this code for any publication, please cite the following article:

A. T. Müller, J. A. Hiss, G. Schneider, "Recurrent Neural Network Model for Constructive Peptide Design", J. Chem. Inf. Model. 2018, DOI: 10.1021/acs.jcim.7b00414.

Issues

Unexpected behavior with boolean program arguments

Hello Alex,

First and foremost, thanks a lot for this great repository. I have been using the codebase to fine-tune a pre-trained model on a set of peptides, stumbled upon some unexpected behavior, and want to bring it up. As suggested in the README, I ran LSTM_peptides.py with the train argument set to False and finetune set to True, but the code entered the train branch in the main function (line 727) and started pre-training instead of fine-tuning.

I dug deeper into the code and found that the argparse module parses the value of the train argument as a string, i.e. "False", which is in turn cast to a boolean by Python and evaluates to True. Thus, the if condition in line 727 (if train:) evaluates to True as long as any non-empty string is provided, triggering the pre-training code.

To isolate and reproduce the error, I have created a small script argparse_test.py with the following content.

import argparse

flags = argparse.ArgumentParser()
# type=bool does not parse strings: bool("False") is True,
# because any non-empty string is truthy in Python
flags.add_argument("-t", "--train", default=True, help="whether the network should be trained or just sampled from", type=bool)
args = flags.parse_args()
print(args.train)

When I run this script in an Ubuntu 20 terminal with python3 argparse_test.py --train False (the Python version is 3.8.10), the output is True. In fact, I have experimented with several values (false, None, True) for train, and the output is always True, except for python3 argparse_test.py --train '', i.e. when train is set to an empty string.

I wonder if I am missing something, or whether the suggested fine-tuning command only works in certain Python versions. If this is indeed unexpected behavior (which might have been affecting many users), I'd be happy to create a pull request as a fix.
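
A common workaround would be to register an explicit string-to-boolean converter instead of type=bool. The sketch below is my suggestion, not code from the repository:

import argparse

def str2bool(value):
    # turn a command-line string such as "True"/"false" into a real boolean
    if isinstance(value, bool):
        return value
    if value.lower() in ("yes", "true", "t", "1"):
        return True
    if value.lower() in ("no", "false", "f", "0"):
        return False
    raise argparse.ArgumentTypeError("Boolean value expected, got %r" % value)

flags = argparse.ArgumentParser()
flags.add_argument("-t", "--train", default=True, type=str2bool,
                   help="whether the network should be trained or just sampled from")
print(flags.parse_args().train)  # --train False now prints False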

Looking forward to your reply!

TypeError: Variable is unhashable. Instead, use tensor.ref() as the key.

Running the LSTM_peptides.py script with

tensorflow==2.3.1
scikit-learn==0.23.2
keras>=2.0.2
progressbar2>=3.34.2
modlamp>=3.3.0

to generate novel peptides fails with the following error:

raise TypeError("Variable is unhashable. "
TypeError: Variable is unhashable. Instead, use tensor.ref() as the key.

I have tried to fix other issues, but I am unable to locate this one.

Error in sequence analysis generated in source code

Hello, I've been paying attention to your work recently, but unfortunately, I can't run your code correctly through the examples you provided. When I execute the command python LSTM_peptides.py --name Train100 --dataset new_sequences.csv --layers 2 --neurons 64 --epochs 10 --sample 2, I get the following error:

Traceback (most recent call last):
  File "LSTM_peptides.py", line 791, in <module>
    finetune=FLAGS.finetune, references=FLAGS.refs)
  File "LSTM_peptides.py", line 780, in main
    data.analyze_generated(sample, fname=model.logdir + '/analysis_temp' + str(temperature) + '.txt', plot=True)
  File "LSTM_peptides.py", line 400, in analyze_generated
    a.plot_summary(filename=fname[:-4] + '.png')
  File "/home/zh/anaconda3/envs/LSTM_Squence/lib/python3.5/site-packages/modlamp/analysis.py", line 197, in plot_summary
    self.calc_len()
  File "/home/zh/anaconda3/envs/LSTM_Squence/lib/python3.5/site-packages/modlamp/analysis.py", line 177, in calc_len
    d.length()
  File "/home/zh/anaconda3/envs/LSTM_Squence/lib/python3.5/site-packages/modlamp/descriptors.py", line 264, in length
    desc.append(float(len(seq.strip())))
AttributeError: 'list' object has no attribute 'strip'

Also, some of the syntax in your code is Python 2 and some is Python 3. Without modifying the source code, I could not use Python 2 to create the model, because some unexpected errors occur under Python 2; I changed some of the code to Python 3 syntax so that I could train the model and generate sequences correctly.
The error occurs because some of the sequences reaching the d.length() function are strings and some are lists. I don't understand why. Shouldn't the generated sequences be strings? Why is there a list? Could you tell me how to resolve this error?

About the pseudo-random peptide sequences

Hello Alex,
I recently read your work and was very inspired. I found the code that generates pseudo-random peptide sequences in your repository:

self.ran = Random(len(self.generated), np.min(d.descriptor), np.max(d.descriptor)) # generate rand seqs
probas = count_aas(''.join(seq_desc.sequences)).values() # get the aa distribution of training seqs
self.ran.generate_sequences(proba=probas)

But the pseudo-random peptide sequences I generated with this code are completely different from the peptide sequences provided in your appendix: only 15% of my sequences are predicted to be AMPs by the CAMP predictor, whereas more than 70% of the pseudo-random peptide sequences you provided are predicted to be AMPs.
May I ask what explains this? How were your pseudo-random peptide sequences generated?
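
For reference, a self-contained version of the snippet above might look as follows; the training sequences and the length bounds are hypothetical placeholders, and the modlamp calls follow the same API as the quoted code:

from modlamp.core import count_aas
from modlamp.sequences import Random

training = ['GLFDIVKKVVGALGSL', 'KLLKLLKKLLKLLK', 'ILPWKWPWWPWRR']  # placeholder sequences

# relative amino acid frequencies of the training data
probas = list(count_aas(''.join(training)).values())

# 100 pseudo-random sequences with lengths between 7 and 28 residues (assumed bounds)
ran = Random(100, 7, 28)
ran.generate_sequences(proba=probas)
print(ran.sequences[:5])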

Looking forward to your reply!
