
safe's Introduction

🦺 SAFE

Sequential Attachment-based Fragment Embedding (SAFE) is a novel molecular line notation that represents molecules as an unordered sequence of fragment blocks to improve molecule design using generative models.



Paper | Docs | 🤗 Model | 🤗 Training Dataset




Overview of SAFE

SAFE is a deep-learning-friendly molecular representation. It is an encoding that leverages a peculiarity of the SMILES decoding scheme to represent molecules as a contiguous sequence of connected fragments. SAFE strings are valid SMILES strings, and thus preserve the same amount of information. This intuitive representation of molecules as an ordered sequence of connected fragments greatly simplifies the following tasks often encountered in molecular design:

  • de novo design
  • superstructure generation
  • scaffold decoration
  • motif extension
  • linker generation
  • scaffold morphing

The construction of a SAFE string requires defining a molecular fragmentation algorithm. By default, we use BRICS, but any other fragmentation algorithm can be used. The image below illustrates the process of building a SAFE string. The resulting string is a valid SMILES that can be read by datamol or RDKit.
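Below is a minimal sketch of this round trip (the example molecule is borrowed from the issues further down; the exact SAFE output depends on the fragmentation):

import safe
from rdkit import Chem

# Encode a SMILES string into SAFE using the default BRICS fragmentation.
smiles = "O=C(C#CCN1CCCCC1)Nc1ccc2ncnc(Nc3cccc(Br)c3)c2c1"
safe_str = safe.encode(smiles)

# A SAFE string is itself a valid SMILES, so RDKit can parse it directly.
mol = Chem.MolFromSmiles(safe_str)
assert mol is not None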


News 🚀

💥 2024/01/15 💥

  1. @IanAWatson has a C++ implementation of SAFE in LillyMol that is quite fast and uses a custom fragmentation algorithm. Follow the installation instructions in the repo and check out the CLI docs here: docs/Molecule_Tools/SAFE.md

Installation

You can install safe using pip:

pip install safe-mol

You can also use conda/mamba:

mamba install -c conda-forge safe-mol

Datasets and Models

Type                    Name                   Infos        Size    Comment
Model                   datamol-io/safe-gpt    87M params   350M    Default model
Training Dataset        datamol-io/safe-gpt    1.1B rows    250GB   Training dataset
Drug Benchmark Dataset  datamol-io/safe-drugs  26 rows      20 kB   Benchmarking dataset

Usage

Please refer to the documentation, which contains tutorials for getting started with safe, detailed descriptions of the functions provided, and an example of how to get started with SAFE-GPT.
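As a rough, hedged sketch of what getting started with SAFE-GPT looks like in code (based on the tutorials; treat SAFEDesign.load_default and de_novo_generation as assumptions and defer to the docs for the authoritative API):

from safe.sample import SAFEDesign

# Load the default pretrained SAFE-GPT model (weights download on first use).
designer = SAFEDesign.load_default(verbose=True)

# Sample new molecules de novo; the result is a list of SMILES strings.
generated_smiles = designer.de_novo_generation(n_samples_per_trial=10)
print(generated_smiles[:3])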

API

We summarize some key functions provided by the safe package below.

Function      Description
safe.encode   Translates a SMILES string into its corresponding SAFE string.
safe.decode   Translates a SAFE string into its corresponding SMILES string. The SAFE decoder simply augments RDKit's Chem.MolFromSmiles with an optional correction argument to take care of missing hydrogen bonds.
safe.split    Tokenizes a SAFE string, e.g. to build a generative model.
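For instance, a small sketch of the correction behavior (the fix keyword is borrowed from the converter's decoder signature that appears in the issues below; treat its use with the top-level safe.decode as an assumption):

import safe

safe_str = safe.encode("CC(Cc1ccc(cc1)C(C(=O)O)C)C")

# Decode with the correction step enabled (assumed to map to the optional
# correction argument mentioned in the table above).
smiles = safe.decode(safe_str, canonical=True, fix=True)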

Examples

Translation between SAFE and SMILES representations

import safe

ibuprofen = "CC(Cc1ccc(cc1)C(C(=O)O)C)C"

# SMILES -> SAFE -> SMILES translation
try:
    ibuprofen_sf = safe.encode(ibuprofen)  # c12ccc3cc1.C3(C)C(=O)O.CC(C)C2
    ibuprofen_smi = safe.decode(ibuprofen_sf, canonical=True)  # CC(C)Cc1ccc(C(C)C(=O)O)cc1
except safe.EncoderError:
    pass
except safe.DecoderError:
    pass

ibuprofen_tokens = list(safe.split(ibuprofen_sf))

Training/Finetuning a (new) model

A command line interface is available to train a new model; run safe-train --help for details. You can also provide an existing checkpoint to continue training or finetune on your own dataset.

For example:

safe-train --config <path to config> \
    --model-path <path to model> \
    --tokenizer  <path to tokenizer> \
    --dataset <path to dataset> \
    --num_labels 9 \
    --torch_compile True \
    --optim "adamw_torch" \
    --learning_rate 1e-5 \
    --prop_loss_coeff 1e-3 \
    --gradient_accumulation_steps 1 \
    --output_dir "<path to outputdir>" \
    --max_steps 5

References

If you use this repository, please cite the following related paper:

@misc{noutahi2023gotta,
      title={Gotta be SAFE: A New Framework for Molecular Design},
      author={Emmanuel Noutahi and Cristian Gabellini and Michael Craig and Jonathan S. C Lim and Prudencio Tossou},
      year={2023},
      eprint={2310.10773},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

License

The training dataset is licensed under CC BY 4.0. See DATA_LICENSE for details. This code base is licensed under the Apache-2.0 license. See LICENSE for details.

Note that the model weights of SAFE-GPT are exclusively licensed for research purposes (CC BY-NC 4.0).

Development lifecycle

Setup dev environment

mamba env create -n safe -f env.yml
mamba activate safe

pip install --no-deps -e .

Tests

You can run tests locally with:

pytest

safe's People

Contributors

maclandrol, hadim, zhu0619, mercuryseries, kjappelbaum


safe's Issues

Attachment points numbered differently by version

Hello. This might be an oddly specific question.

Running the Getting Started with SAFE tutorial, the decoded example for safe_fragment[0] in version 0.1.3 is:
'FC(F)(F)c1cc([:2])n([:3])n1'

However, this changes in version 0.1.4, where safe_fragment[0] is now:
'FC(F)(F)c1cc([:4])n([:3])n1'

Is version 0.1.4 now reading molecules differently?

Identifying the starting number for the bond fragments

Hi Emmanuel, it's Ulrich (I said hello on LinkedIn).
Thanks for the amazing work. I wanted to find out whether there is a way to identify the starting_number used for the new rings in a SAFE-encoded string (https://github.com/datamol-io/safe/blob/main/safe/converter.py#L330). I want to experiment with a slightly different representation of SAFE, but I need a neat mechanism to convert from SAFE to my representation and back.

I took this smile as an example, the one in the paper.
repr = encode("O=C(C#CCN1CCCCC1)Nc1ccc2ncnc(Nc3cccc(Br)c3)c2c1", canonical=False)
Running the above, I get 'O=C7C#CC6.N16CCCCC1.N74.c14ccc2ncnc8c2c1.N85.c15cccc(Br)c1'.

The starting_number here is 4, but is there a function I can use to determine this?

To do

  • Revisit README
  • Clean code
  • Build docs
  • Model availability
  • Training scripts
  • Application demo

mc_labels

Hi, awesome stuff.

What does mc_labels contain? There's no info about it. Thank you!

Mol rings and attachment number with multiple rings

Amazing work guys! Very impressed.

We hit an issue with the encode function on the following molecule:

import safe as sf
eh=sf.encode('CC1=CC=C(C=C1)C#CC1=NC=C(C=C1)C(F)(F)F')
print(eh)
#Cc1ccc(C#Cc2ccc2cn2)cc1.C2(F)(F)F
#expecting:
#Cc1ccc(C#Cc2ccc3cn2)cc1.C3(F)(F)F
sf.to_image('Cc1ccc(C#Cc2ccc3cn2)cc1.C3(F)(F)F')

Thank you!

output is not deterministic (?)

Thanks for packaging up SAFE so nicely!

I've been trying

safe.encode(smiles, seed=42, canonical=True, randomize=False)

with smiles=CC(Cc1ccc(cc1)C(C(=O)O)C)C (using the latest PyPI release)

but sometimes receive c12ccc3cc1.C3(C)C(=O)O.CC(C)C2, and other times c13ccc2cc1.C2(C)C(=O)O.CC(C)C3.

I realize that the representations are equivalent, but I wondered whether the canonical output was supposed to be deterministic?

Cheers,
Kevin

Decoding a fragment fails on double bonds [possible bug?]

I noticed that if the slicer (in this case BRICS) breaks double bonds, the resulting fragment cannot be properly decoded.

Using one such molecule and following the documentation:

import safe as sf

safe_str = sf.encode("C(=C/c1ccccc1)\CCc1ccccc1", canonical=True)
print(safe_str)

safe_fragment = safe_str.split(".")
sf.decode(safe_fragment[0], as_mol=True)

I get: SAFEDecodeError: Failed to decode C(=2)c1ccccc1

I think the more appropriate output might be C(=[*])c1ccccc1?

Cannot train model from scratch

When running the following script:

config_path="../trainer/configs/small_config.json"
tokenizer_path="../tokenizer.json"
dataset_path="../../Datasets/MOSES/datasets"
output_dir="./trained/SAFE_small"

safe-train --config $config_path \
  --tokenizer $tokenizer_path \
  --dataset $dataset_path \
  --text_column "SAFE" \
  --torch_compile True \
  --optim "adamw_torch" \
  --learning_rate 5e-4 \
  --prop_loss_coeff 1e-3 \
  --gradient_accumulation_steps 1 \
  --output_dir $output_dir \
  --num_labels 9 \
  --num_train_epochs 3 \
  --per_device_train_batch_size 8 \
  --lr_scheduler_type "linear" \
  --warmup_steps 500 \
  --logging_steps 100 \
  --evaluation_strategy "steps" \
  --eval_steps 500 \
  --save_steps 500 \
  --load_best_model_at_end True \
  --metric_for_best_model "eval_loss" \
  --greater_is_better False

I get the error:

Traceback (most recent call last):
File "/home/lmbanr001/.local/bin/safe-train", line 8, in
sys.exit(main())
File "/home/lmbanr001/.local/lib/python3.10/site-packages/safe/trainer/cli.py", line 406, in main
train(model_args, data_args, training_args)
File "/home/lmbanr001/.local/lib/python3.10/site-packages/safe/trainer/cli.py", line 335, in train
trainer = SAFETrainer(
File "/home/lmbanr001/.local/lib/python3.10/site-packages/safe/trainer/trainer_utils.py", line 19, in init
self.accelerator.dispatch_batches = dispatch_batches
AttributeError: can't set attribute 'dispatch_batches'

As I understand it, dispatch_batches is set to true when using another form of ingesting the data. Is there some intuition as to why my code is trying to set dispatch_batches?

Goal-directed generative capabilities

Hi,

Firstly, thank you for making this excellent repository. The SAFE paper has some really interesting insights on molecular design.

As I was going over the codebase, I did not find any code for the reinforcement learning part. Could you please let me know if I am missing something?
Also, how are you getting the advantage estimates in the PPO loss? Is there an additional value network?

Apologies if the questions are too obvious. I would really appreciate your insights on this.

Strange artifacts when wildcard (*) present in SMILES

Hi, I found a few artifacts when dealing with SMILES that contain the wildcard *, especially in rings. In the attached screenshot, you can see that a ring becomes a 'square' after encoding and decoding. This happens when converting from BRICS to SAFE.

[screenshot omitted]

Decoding fragments containing square brackets fails

I am using the decoder to decode individual fragments, and I noticed that if the fragment SMILES contains square brackets, the decoding fails:

import safe as sf
import datamol as dm

example_failed = """
O=C(CN1CC[NH2+]CC1)N1CCCCC1
[NH3+]Cc1ccccc1
c1cc2c(cc1[C@@H]1CCC[NH2+]1)OCCO2
"""


safe_obj = sf.SAFEConverter(slicer="brics", require_hs=False)

safe_str = sf.encode("c1cc2c(cc1[C@@H]1CCC[NH2+]1)OCCO2", canonical=True)
safe_fragment = safe_str.split(".")

with dm.without_rdkit_log():
    for frag in safe_fragment:
        f = safe_obj.decoder(
            frag,
            as_mol=False,
            canonical=False,
            fix=True,
            remove_dummies=True,
            remove_added_hs=True,
        )
        if f is None:
            print(frag)

This returns None, and I suspect it is due to the way square brackets are parsed in the decoder.

Thank you.

Inquiry Regarding Reverse Molecular Design and Comparison of Models

Hi Emmanuel Noutahi,

I trust this message finds you well. I recently came across your article on the impressive performance of the SAFE representation in reverse molecular design. I have a few questions and would greatly appreciate your insights.

Firstly, in your comparison of the performance of different large pretrained models on molecules, I noticed the absence of MOLGPT, which is known for its exceptional performance. Given MOLGPT's ability to conduct conditional generation on targeted fragments or properties, my first question is about the performance comparison between SAFE and MOLGPT (e.g., Table 2).

Secondly, could you shed some light on the comparison between SAFE and MOLGPT in terms of their capabilities for conditional generation on targeted fragments or properties?

Lastly, I am curious about the choice not to employ conditional generation, as seen in MOLGPT, and instead adopt Proximal Policy Optimization (PPO) for goal-directed generative tasks. Additionally, it appears that the PPO-related programs are not open-sourced. Could you provide some insights into the rationale behind this choice?

Thank you in advance for your time and consideration. I look forward to hearing from you soon.

Best,

Yan Chen

Error in Fused Ring Systems

Thanks for the amazing work. I found that in certain cases the encoding and decoding of SMILES strings is not consistent. For example, consider the canonical string:

'CC1CCC2(C)C(CCC3C2CCC2(C)C(C(=O)CO)CCC32)C1'

In canonical form, this string has only 3 ring-closure digits despite having 4 ring systems, while the generated fragments again contain 4 digits, so the assignment of the attachment integers fails. Once you decode the SAFE string, you end up with a different molecule that has a 7- and a 4-membered ring instead of the original 6- and 5-membered rings.

Here is a small code snippet to reproduce the issue:

import safe 
import datamol as dm
test_string = 'CC1CCC2(C)C(CCC3C2CCC2(C)C(C(=O)CO)CCC32)C1'

output_string = safe.decode(safe.encode(test_string))

print(output_string == test_string)
dm.viz.to_image([dm.to_mol(test_string), dm.to_mol(output_string)])

[images of the original and decoded molecules omitted]

About generation evaluation

Hello, how do you evaluate the model in terms of validity, uniqueness, and diversity?

I followed your tutorial, but I only get 83% validity for de novo generation.

embedding layer

Hi,
What is the easiest way to extract the embedding layer so we can get a 768-dim (or x-dim) vector for every molecule?
