atomicarchitects / equiformer
188 stars · 5 watchers · 36 forks · 4.04 MB

[ICLR'23 Spotlight] Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs

Home Page: https://arxiv.org/abs/2206.11990

License: MIT License

Languages: Python 88.52% · Shell 11.48%
Topics: catalyst-design, computational-chemistry, deep-learning, drug-discovery, equivariant-graph-neural-network, force-fields, interatomic-potentials, machine-learning, molecular-dynamics, pytorch

equiformer's People

Contributors

yilunliao


equiformer's Issues

Error: raise ProxySchemeUnknown(proxy.scheme) urllib3.exceptions.ProxySchemeUnknown: Proxy URL had no scheme, should start with http:// or https://

The error occurs at line 66 of datasets/pyg/qm9.py while running sh scripts/train/qm9/equiformer/[email protected]:

raw_url = ('https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/'
           'molnet_publish/qm9.zip')

What worked for me: join the two implicitly concatenated string literals into a single one, removing the line break and the quotes around 'molnet_publish/qm9.zip'.

Finally:
raw_url = ('https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/molnet_publish/qm9.zip')
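
For the proxy error itself: urllib3 raises ProxySchemeUnknown whenever a proxy environment variable is set without a scheme (for example 127.0.0.1:7890 instead of http://127.0.0.1:7890). A minimal sketch of one way to patch this up before the download runs; the variable names are the standard ones, but whether this applies depends on your environment:

import os

# urllib3 rejects proxy URLs without a scheme; prefix one if it is missing.
for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    value = os.environ.get(var)
    if value and not value.startswith(("http://", "https://")):
        os.environ[var] = "http://" + value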

[QUESTION] about EquivariantLayerNormV2

Hi, thanks for your wonderful work.
I have a question about the class EquivariantLayerNormV2 in nets/layer_norm.py.
When computing the field mean with
field_mean = torch.mean(field, dim=1, keepdim=True) # [batch, mul, 1] ,
should dim here actually be -1? We also compute field_norm with dim=-1 a few lines later. (A small numeric illustration follows the excerpt below.)

Related code:

for mul, ir in self.irreps:  # mul is the multiplicity (number of copies) of some irrep type (ir)
    d = ir.dim
    field = node_input.narrow(1, ix, mul * d)
    ix += mul * d

    # [batch * sample, mul, repr]
    field = field.reshape(-1, mul, d)

    # For scalars, first compute and subtract the mean
    if ir.l == 0 and ir.p == 1:
        # TODO: here the dim should be -1?
        field_mean = torch.mean(field, dim=1, keepdim=True)  # [batch, mul, 1]
        field = field - field_mean

    # Then compute the rescaling factor (norm of each feature vector)
    # Rescaling of the norms themselves based on the option "normalization"
    if self.normalization == 'norm':
        field_norm = field.pow(2).sum(-1)  # [batch * sample, mul]
    elif self.normalization == 'component':
        field_norm = field.pow(2).mean(-1)  # [batch * sample, mul]
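
For intuition, a small standalone sketch (not from the repo) of why dim=1 is the meaningful choice for scalars: an l=0 irrep has dimension d=1, so a mean over dim=-1 would average a single element and leave the field unchanged, whereas dim=1 centers each group of scalar channels across its mul copies:

import torch

# Scalars (l = 0) have d = 1, so the reshaped field is [batch, mul, 1].
field = torch.randn(4, 8, 1)

mean_over_mul = field.mean(dim=1, keepdim=True)   # [4, 1, 1]: centers across channels
mean_over_dim = field.mean(dim=-1, keepdim=True)  # [4, 8, 1]: a no-op when d == 1

print(torch.allclose(mean_over_dim, field))                    # True: dim=-1 changes nothing
print((field - mean_over_mul).mean(dim=1).abs().max() < 1e-6)  # True: centered over mul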

Reduce the model size

Hi, thanks for sharing the code. I'd like to try it on my own dataset.

Unlike MD17, where the molecules have only 12 atoms, my dataset has more atoms per molecule, so it allocates more GPU memory. Could you give me some advice on reducing the model or input size?

The training and evaluation batch sizes are set to 8; the OOM only goes away if the force computation is removed during testing (see the chunked-evaluation sketch after the traceback below).

Thank you.

Number of params: 3500609
Epoch: [0][0/2500] 	loss_e: 0.76386, loss_f: 0.26663, e_MAE: 231721.79688, f_MAE: 40633.41797, time/step=1221ms, lr=1.00e-06
Epoch: [0][100/2500] 	loss_e: 0.78266, loss_f: 0.17736, e_MAE: 237424.20057, f_MAE: 26658.75540, time/step=269ms, lr=1.00e-06
Epoch: [0][200/2500] 	loss_e: 0.67438, loss_f: 0.14274, e_MAE: 204576.80185, f_MAE: 21441.58661, time/step=256ms, lr=1.00e-06
Epoch: [0][300/2500] 	loss_e: 0.60979, loss_f: 0.12493, e_MAE: 184983.89528, f_MAE: 18766.71555, time/step=248ms, lr=1.00e-06
Epoch: [0][400/2500] 	loss_e: 0.56140, loss_f: 0.11186, e_MAE: 170304.47533, f_MAE: 16805.37575, time/step=244ms, lr=1.00e-06
Epoch: [0][500/2500] 	loss_e: 0.52590, loss_f: 0.10222, e_MAE: 159535.96707, f_MAE: 15353.32029, time/step=241ms, lr=1.00e-06
Epoch: [0][600/2500] 	loss_e: 0.49673, loss_f: 0.09442, e_MAE: 150684.69430, f_MAE: 14181.63761, time/step=239ms, lr=1.00e-06
Epoch: [0][700/2500] 	loss_e: 0.47570, loss_f: 0.08797, e_MAE: 144305.35255, f_MAE: 13212.08826, time/step=238ms, lr=1.00e-06
Epoch: [0][800/2500] 	loss_e: 0.45563, loss_f: 0.08253, e_MAE: 138218.50663, f_MAE: 12395.78744, time/step=237ms, lr=1.00e-06
Epoch: [0][900/2500] 	loss_e: 0.44249, loss_f: 0.07803, e_MAE: 134231.66208, f_MAE: 11719.96249, time/step=236ms, lr=1.00e-06
Epoch: [0][1000/2500] 	loss_e: 0.42950, loss_f: 0.07414, e_MAE: 130291.51578, f_MAE: 11135.17785, time/step=236ms, lr=1.00e-06
Epoch: [0][1100/2500] 	loss_e: 0.41839, loss_f: 0.07068, e_MAE: 126920.91153, f_MAE: 10616.74550, time/step=235ms, lr=1.00e-06
Epoch: [0][1200/2500] 	loss_e: 0.40806, loss_f: 0.06769, e_MAE: 123787.15601, f_MAE: 10166.87837, time/step=235ms, lr=1.00e-06
Epoch: [0][1300/2500] 	loss_e: 0.39778, loss_f: 0.06493, e_MAE: 120668.85974, f_MAE: 9753.33389, time/step=235ms, lr=1.00e-06
Epoch: [0][1400/2500] 	loss_e: 0.39040, loss_f: 0.06247, e_MAE: 118430.25954, f_MAE: 9383.97292, time/step=235ms, lr=1.00e-06
Epoch: [0][1500/2500] 	loss_e: 0.38277, loss_f: 0.06031, e_MAE: 116114.40569, f_MAE: 9058.41404, time/step=234ms, lr=1.00e-06
Epoch: [0][1600/2500] 	loss_e: 0.37518, loss_f: 0.05833, e_MAE: 113813.39178, f_MAE: 8760.11055, time/step=234ms, lr=1.00e-06
Epoch: [0][1700/2500] 	loss_e: 0.36870, loss_f: 0.05644, e_MAE: 111847.87797, f_MAE: 8476.67351, time/step=234ms, lr=1.00e-06
Epoch: [0][1800/2500] 	loss_e: 0.36322, loss_f: 0.05477, e_MAE: 110185.60253, f_MAE: 8224.51457, time/step=234ms, lr=1.00e-06
Epoch: [0][1900/2500] 	loss_e: 0.35698, loss_f: 0.05320, e_MAE: 108291.43613, f_MAE: 7988.39987, time/step=234ms, lr=1.00e-06
Epoch: [0][2000/2500] 	loss_e: 0.35160, loss_f: 0.05175, e_MAE: 106659.09827, f_MAE: 7770.07496, time/step=233ms, lr=1.00e-06
Epoch: [0][2100/2500] 	loss_e: 0.34805, loss_f: 0.05041, e_MAE: 105582.97466, f_MAE: 7569.16291, time/step=233ms, lr=1.00e-06
Epoch: [0][2200/2500] 	loss_e: 0.34287, loss_f: 0.04914, e_MAE: 104013.07274, f_MAE: 7378.71877, time/step=233ms, lr=1.00e-06
Epoch: [0][2300/2500] 	loss_e: 0.33794, loss_f: 0.04798, e_MAE: 102517.56985, f_MAE: 7204.14332, time/step=233ms, lr=1.00e-06
Epoch: [0][2400/2500] 	loss_e: 0.33326, loss_f: 0.04687, e_MAE: 101095.59328, f_MAE: 7036.94210, time/step=233ms, lr=1.00e-06
Epoch: [0][2499/2500] 	loss_e: 0.32863, loss_f: 0.04580, e_MAE: 99691.93445, f_MAE: 6876.80393, time/step=233ms, lr=1.00e-06
Traceback (most recent call last):
  File "main_aliqm.py", line 489, in <module>
    main(args)
  File "main_aliqm.py", line 236, in main
    val_err, val_loss = evaluate(args=args, model=model, criterion=criterion, 
  File "main_aliqm.py", line 449, in evaluate
    pred_y, pred_dy = model(node_atom=data.z, pos=data.pos, batch=data.batch)
  File "/home/ubuntu/Softwares/anaconda3/envs/rapids/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/Softwares/anaconda3/envs/rapids/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/ubuntu/Projects/equiformer/nets/graph_attention_transformer_aliqm.py", line 319, in forward
    torch.autograd.grad(
  File "/home/ubuntu/Softwares/anaconda3/envs/rapids/lib/python3.8/site-packages/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 44.00 MiB (GPU 0; 23.69 GiB total capacity; 22.38 GiB already allocated; 16.44 MiB free; 22.52 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
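
The OOM above occurs inside torch.autograd.grad during force evaluation, where the whole forward graph must be kept alive. One workaround sketch (a hypothetical helper, not part of the repo) is to evaluate a large batch a few graphs at a time; it assumes a model(node_atom=..., pos=..., batch=...) signature like the repo's, but with the model returning only per-graph energies:

import torch

def evaluate_forces_in_chunks(model, z, pos, batch, chunk_size=2):
    # `batch` maps each node to its graph index, as in PyG.
    num_graphs = int(batch.max().item()) + 1
    energies, forces = [], []
    for start in range(0, num_graphs, chunk_size):
        node_mask = (batch >= start) & (batch < start + chunk_size)
        pos_c = pos[node_mask].detach().requires_grad_(True)
        batch_c = batch[node_mask] - start
        with torch.enable_grad():
            energy = model(node_atom=z[node_mask], pos=pos_c, batch=batch_c)
            # Forces are the negative gradient of the total energy w.r.t. positions.
            force = -torch.autograd.grad(energy.sum(), pos_c)[0]
        energies.append(energy.detach())  # detach so the graph is freed per chunk
        forces.append(force.detach())
    return torch.cat(energies), torch.cat(forces)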

Problem importing ../../logger.py

Traceback (most recent call last):
  File "main_oc20.py", line 29, in <module>
    import oc20.trainer
  File "/home/a113/FY/Code/equiformer-master/oc20/trainer/__init__.py", line 17, in <module>
    from .energy_trainer_v2 import EnergyTrainerV2
  File "/home/a113/FY/Code/equiformer-master/oc20/trainer/energy_trainer_v2.py", line 26, in <module>
    from .base_trainer_v2 import BaseTrainerV2, interpolate_init_relaxed_pos
  File "/home/a113/FY/Code/equiformer-master/oc20/trainer/base_trainer_v2.py", line 55, in <module>
    from .logger import FileLogger
  File "/home/a113/FY/Code/equiformer-master/oc20/trainer/logger.py", line 1
    ../../logger.py
    ^
SyntaxError: invalid syntax
Why does this error occur on import?
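
A hedged diagnosis, not confirmed against the repo: oc20/trainer/logger.py appears to be a git symlink to the top-level logger.py, and a checkout that does not preserve symlinks (for example, downloading the source as a zip) leaves a plain file containing the literal text ../../logger.py, which is exactly what the SyntaxError shows. One way to restore it, assuming that layout:

import os

# Recreate the symlink so oc20/trainer/logger.py resolves to the top-level
# logger.py again (run from the repository root).
link_path = "oc20/trainer/logger.py"
os.remove(link_path)  # remove the file containing only "../../logger.py"
os.symlink("../../logger.py", link_path)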

Can't reproduce MD17 results

Dear authors,

Thank you very much for sharing the neat codebase. I have successfully reproduced the QM9 results, but I have a problem with the MD17 results.

When I ran an MD17 training script (scripts/train/md17/equiformer/se_l2/[email protected]), I obtained an e_MAE of 0.121 and an f_MAE of 0.164 (see the log summary below). In Table 2, the Equiformer (Lmax=2) results for aspirin are 5.3 meV (energy) and 7.2 meV/Å (forces). If the e_MAE of 0.121 and f_MAE of 0.164 in the log were in eV and eV/Å, they would be 121 meV and 164 meV/Å, which is far off. How do I interpret the results of MD17 training? Or is there a problem with the model or training procedure?

Thanks
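
A hedged guess at the resolution (I have not verified the loader, so treat the unit as an assumption): the raw MD17 data is distributed in kcal/mol and kcal/mol/Å rather than eV, and 1 kcal/mol ≈ 43.36 meV. Converting the best test errors from the log below then lines up with Table 2:

KCAL_PER_MOL_IN_MEV = 43.364  # 1 kcal/mol ≈ 43.364 meV

e_mae_mev = 0.12179 * KCAL_PER_MOL_IN_MEV  # ≈ 5.28 meV   (paper: 5.3 meV)
f_mae_mev = 0.16402 * KCAL_PER_MOL_IN_MEV  # ≈ 7.11 meV/Å (paper: 7.2 meV/Å)
print(f"e_MAE ≈ {e_mae_mev:.2f} meV, f_MAE ≈ {f_mae_mev:.2f} meV/Å")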

2024-08-13 06:15:32,263 - logger.py:50 - Namespace(batch_size=8, checkpoint_path=None, clip_grad=None, compute_stats=False, cooldown_epochs=10, data_path='datasets/md17', decay_epochs=30, decay_rate=0.1, drop_path=0.0, energy_weight=1.0, epochs=1500, eval_batch_size=24, evaluate=False, force_weight=80.0, input_irreps='64x0e', lr=0.0005, lr_noise=None, lr_noise_pct=0.67, lr_noise_std=1.0, min_lr=1e-06, model_ema=False, model_ema_decay=0.9999, model_ema_force_cpu=False, model_name='graph_attention_transformer_nonlinear_exp_l2_md17', momentum=0.9, num_basis=32, opt='adamw', opt_betas=None, opt_eps=1e-08, output_dir='models/md17/equiformer/se_l2/target@aspirin/lr@5e-4_wd@1e-6_epochs@1500_w-f2e@[email protected]_exp@32_l2mae-loss', patience_epochs=10, pin_mem=True, print_freq=100, radius=5.0, sched='cosine', seed=1, target='aspirin', test_interval=10, test_max_iter=1000, train_size=950, val_size=50, warmup_epochs=10, warmup_lr=1e-06, weight_decay=1e-06, workers=4)
2024-08-13 06:16:35,751 - logger.py:50 - 
2024-08-13 06:16:35,751 - logger.py:50 - Training set size:   950
2024-08-13 06:16:35,751 - logger.py:50 - Validation set size: 50
2024-08-13 06:16:35,751 - logger.py:50 - Testing set size:    210762

2024-08-13 06:16:35,814 - logger.py:50 - Training set mean: -406737.59375, std: 5.775040149688721

2024-08-13 06:16:44,327 - logger.py:50 - GraphAttentionTransformerMD17(
...
...
...
2024-08-14 15:15:22,537 - logger.py:50 - Epoch: [1498][0/118] 	loss_e: 0.02426, loss_f: 0.00271, e_MAE: 0.14062, f_MAE: 0.00781, time/step=830ms, lr=1.00e-06
2024-08-14 15:16:12,267 - logger.py:50 - Epoch: [1498][100/118] 	loss_e: 0.01886, loss_f: 0.00253, e_MAE: 0.10895, f_MAE: 0.00728, time/step=501ms, lr=1.00e-06
2024-08-14 15:16:20,467 - logger.py:50 - Epoch: [1498][117/118] 	loss_e: 0.01924, loss_f: 0.00252, e_MAE: 0.11106, f_MAE: 0.00726, time/step=498ms, lr=1.00e-06
2024-08-14 15:16:21,219 - logger.py:50 - Epoch: [1498] Target: [aspirin] train_e_MAE: 0.11106, train_f_MAE: 0.00726, val_e_MAE: 0.13125, val_f_MAE: 0.17452, Time: 59.51s
2024-08-14 15:16:21,222 - logger.py:50 - Best -- val_epoch=1328, test_epoch=1299, val_e_MAE: 0.14500, val_f_MAE: 0.17351, test_e_MAE: 0.12179, test_f_MAE: 0.16402

2024-08-14 15:16:22,030 - logger.py:50 - Epoch: [1499][0/118] 	loss_e: 0.02136, loss_f: 0.00246, e_MAE: 0.12500, f_MAE: 0.00709, time/step=804ms, lr=1.00e-06
2024-08-14 15:17:13,041 - logger.py:50 - Epoch: [1499][100/118] 	loss_e: 0.01947, loss_f: 0.00254, e_MAE: 0.11247, f_MAE: 0.00731, time/step=513ms, lr=1.00e-06
2024-08-14 15:17:21,562 - logger.py:50 - Epoch: [1499][117/118] 	loss_e: 0.01951, loss_f: 0.00253, e_MAE: 0.11269, f_MAE: 0.00728, time/step=511ms, lr=1.00e-06
2024-08-14 15:17:22,575 - logger.py:50 - [0/8782] 	e_MAE: 0.08594, f_MAE: 0.15897, time/step=233ms
2024-08-14 15:17:46,171 - logger.py:50 - [100/8782] 	e_MAE: 0.12222, f_MAE: 0.16589, time/step=236ms
2024-08-14 15:18:09,565 - logger.py:50 - [200/8782] 	e_MAE: 0.12034, f_MAE: 0.16369, time/step=235ms
2024-08-14 15:18:35,152 - logger.py:50 - [300/8782] 	e_MAE: 0.12122, f_MAE: 0.16414, time/step=242ms
2024-08-14 15:19:00,732 - logger.py:50 - [400/8782] 	e_MAE: 0.12158, f_MAE: 0.16420, time/step=245ms
2024-08-14 15:19:25,884 - logger.py:50 - [500/8782] 	e_MAE: 0.12192, f_MAE: 0.16435, time/step=247ms
2024-08-14 15:19:49,678 - logger.py:50 - [600/8782] 	e_MAE: 0.12171, f_MAE: 0.16450, time/step=245ms
2024-08-14 15:20:13,032 - logger.py:50 - [700/8782] 	e_MAE: 0.12177, f_MAE: 0.16447, time/step=243ms
2024-08-14 15:20:38,545 - logger.py:50 - [800/8782] 	e_MAE: 0.12175, f_MAE: 0.16425, time/step=245ms
2024-08-14 15:21:03,548 - logger.py:50 - [900/8782] 	e_MAE: 0.12163, f_MAE: 0.16408, time/step=246ms
2024-08-14 15:21:27,730 - logger.py:50 - Epoch: [1499] Target: [aspirin] train_e_MAE: 0.11269, train_f_MAE: 0.00728, val_e_MAE: 0.12250, val_f_MAE: 0.17451, test_e_MAE: 0.12174, test_f_MAE: 0.16416, Time: 306.51s
2024-08-14 15:21:27,731 - logger.py:50 - Best -- val_epoch=1328, test_epoch=1299, val_e_MAE: 0.14500, val_f_MAE: 0.17351, test_e_MAE: 0.12179, test_f_MAE: 0.16402

2024-08-14 15:21:27,979 - logger.py:50 - [0/8782] 	e_MAE: 0.08594, f_MAE: 0.15897, time/step=245ms
2024-08-14 15:21:52,608 - logger.py:50 - [100/8782] 	e_MAE: 0.12222, f_MAE: 0.16589, time/step=246ms
2024-08-14 15:22:18,034 - logger.py:50 - [200/8782] 	e_MAE: 0.12034, f_MAE: 0.16369, time/step=250ms
2024-08-14 15:22:41,241 - logger.py:50 - [300/8782] 	e_MAE: 0.12122, f_MAE: 0.16414, time/step=244ms
2024-08-14 15:23:04,796 - logger.py:50 - [400/8782] 	e_MAE: 0.12158, f_MAE: 0.16420, time/step=242ms
2024-08-14 15:23:29,745 - logger.py:50 - [500/8782] 	e_MAE: 0.12192, f_MAE: 0.16435, time/step=244ms
2024-08-14 15:23:53,084 - logger.py:50 - [600/8782] 	e_MAE: 0.12170, f_MAE: 0.16450, time/step=242ms
2024-08-14 15:24:17,515 - logger.py:50 - [700/8782] 	e_MAE: 0.12177, f_MAE: 0.16447, time/step=242ms
2024-08-14 15:24:41,179 - logger.py:50 - [800/8782] 	e_MAE: 0.12174, f_MAE: 0.16425, time/step=242ms
2024-08-14 15:25:05,296 - logger.py:50 - [900/8782] 	e_MAE: 0.12163, f_MAE: 0.16408, time/step=241ms
2024-08-14 15:25:30,030 - logger.py:50 - [1000/8782] 	e_MAE: 0.12173, f_MAE: 0.16416, time/step=242ms
2024-08-14 15:25:53,342 - logger.py:50 - [1100/8782] 	e_MAE: 0.12194, f_MAE: 0.16424, time/step=241ms
2024-08-14 15:26:18,379 - logger.py:50 - [1200/8782] 	e_MAE: 0.12184, f_MAE: 0.16426, time/step=242ms
2024-08-14 15:26:43,277 - logger.py:50 - [1300/8782] 	e_MAE: 0.12140, f_MAE: 0.16434, time/step=243ms
2024-08-14 15:27:06,617 - logger.py:50 - [1400/8782] 	e_MAE: 0.12134, f_MAE: 0.16450, time/step=242ms
2024-08-14 15:27:30,448 - logger.py:50 - [1500/8782] 	e_MAE: 0.12146, f_MAE: 0.16458, time/step=242ms
2024-08-14 15:27:53,691 - logger.py:50 - [1600/8782] 	e_MAE: 0.12139, f_MAE: 0.16461, time/step=241ms
2024-08-14 15:28:16,919 - logger.py:50 - [1700/8782] 	e_MAE: 0.12141, f_MAE: 0.16459, time/step=241ms
2024-08-14 15:28:40,205 - logger.py:50 - [1800/8782] 	e_MAE: 0.12145, f_MAE: 0.16459, time/step=240ms
2024-08-14 15:29:03,559 - logger.py:50 - [1900/8782] 	e_MAE: 0.12155, f_MAE: 0.16464, time/step=240ms
2024-08-14 15:29:26,802 - logger.py:50 - [2000/8782] 	e_MAE: 0.12157, f_MAE: 0.16471, time/step=239ms
2024-08-14 15:29:50,062 - logger.py:50 - [2100/8782] 	e_MAE: 0.12158, f_MAE: 0.16479, time/step=239ms
2024-08-14 15:30:13,615 - logger.py:50 - [2200/8782] 	e_MAE: 0.12147, f_MAE: 0.16473, time/step=239ms
2024-08-14 15:30:36,899 - logger.py:50 - [2300/8782] 	e_MAE: 0.12163, f_MAE: 0.16487, time/step=239ms
2024-08-14 15:30:59,535 - logger.py:50 - [2400/8782] 	e_MAE: 0.12164, f_MAE: 0.16488, time/step=238ms
2024-08-14 15:31:22,196 - logger.py:50 - [2500/8782] 	e_MAE: 0.12169, f_MAE: 0.16491, time/step=238ms
2024-08-14 15:31:45,702 - logger.py:50 - [2600/8782] 	e_MAE: 0.12185, f_MAE: 0.16487, time/step=238ms
2024-08-14 15:32:11,585 - logger.py:50 - [2700/8782] 	e_MAE: 0.12177, f_MAE: 0.16484, time/step=238ms
2024-08-14 15:32:35,633 - logger.py:50 - [2800/8782] 	e_MAE: 0.12172, f_MAE: 0.16484, time/step=238ms
2024-08-14 15:33:04,192 - logger.py:50 - [2900/8782] 	e_MAE: 0.12175, f_MAE: 0.16490, time/step=240ms
2024-08-14 15:33:31,322 - logger.py:50 - [3000/8782] 	e_MAE: 0.12179, f_MAE: 0.16489, time/step=241ms
2024-08-14 15:33:54,789 - logger.py:50 - [3100/8782] 	e_MAE: 0.12184, f_MAE: 0.16493, time/step=241ms
2024-08-14 15:34:19,558 - logger.py:50 - [3200/8782] 	e_MAE: 0.12189, f_MAE: 0.16493, time/step=241ms
2024-08-14 15:34:43,629 - logger.py:50 - [3300/8782] 	e_MAE: 0.12189, f_MAE: 0.16500, time/step=241ms
2024-08-14 15:35:11,848 - logger.py:50 - [3400/8782] 	e_MAE: 0.12202, f_MAE: 0.16502, time/step=242ms
2024-08-14 15:35:38,457 - logger.py:50 - [3500/8782] 	e_MAE: 0.12195, f_MAE: 0.16490, time/step=243ms
2024-08-14 15:36:01,681 - logger.py:50 - [3600/8782] 	e_MAE: 0.12203, f_MAE: 0.16488, time/step=243ms
2024-08-14 15:36:24,843 - logger.py:50 - [3700/8782] 	e_MAE: 0.12202, f_MAE: 0.16488, time/step=242ms
2024-08-14 15:36:50,773 - logger.py:50 - [3800/8782] 	e_MAE: 0.12208, f_MAE: 0.16486, time/step=243ms
2024-08-14 15:37:14,191 - logger.py:50 - [3900/8782] 	e_MAE: 0.12210, f_MAE: 0.16486, time/step=243ms
2024-08-14 15:37:38,933 - logger.py:50 - [4000/8782] 	e_MAE: 0.12209, f_MAE: 0.16482, time/step=243ms
2024-08-14 15:38:03,202 - logger.py:50 - [4100/8782] 	e_MAE: 0.12202, f_MAE: 0.16479, time/step=243ms
2024-08-14 15:38:29,158 - logger.py:50 - [4200/8782] 	e_MAE: 0.12204, f_MAE: 0.16473, time/step=243ms
2024-08-14 15:38:55,356 - logger.py:50 - [4300/8782] 	e_MAE: 0.12202, f_MAE: 0.16467, time/step=244ms
2024-08-14 15:39:19,485 - logger.py:50 - [4400/8782] 	e_MAE: 0.12205, f_MAE: 0.16469, time/step=244ms
2024-08-14 15:39:44,400 - logger.py:50 - [4500/8782] 	e_MAE: 0.12204, f_MAE: 0.16466, time/step=244ms
2024-08-14 15:40:10,302 - logger.py:50 - [4600/8782] 	e_MAE: 0.12197, f_MAE: 0.16466, time/step=244ms
2024-08-14 15:40:36,153 - logger.py:50 - [4700/8782] 	e_MAE: 0.12198, f_MAE: 0.16465, time/step=244ms
2024-08-14 15:41:01,544 - logger.py:50 - [4800/8782] 	e_MAE: 0.12201, f_MAE: 0.16472, time/step=244ms
2024-08-14 15:41:26,405 - logger.py:50 - [4900/8782] 	e_MAE: 0.12197, f_MAE: 0.16469, time/step=245ms
2024-08-14 15:41:50,966 - logger.py:50 - [5000/8782] 	e_MAE: 0.12195, f_MAE: 0.16475, time/step=245ms
2024-08-14 15:42:17,292 - logger.py:50 - [5100/8782] 	e_MAE: 0.12199, f_MAE: 0.16475, time/step=245ms
2024-08-14 15:42:43,400 - logger.py:50 - [5200/8782] 	e_MAE: 0.12197, f_MAE: 0.16479, time/step=245ms
2024-08-14 15:43:10,288 - logger.py:50 - [5300/8782] 	e_MAE: 0.12192, f_MAE: 0.16469, time/step=246ms
2024-08-14 15:43:36,626 - logger.py:50 - [5400/8782] 	e_MAE: 0.12194, f_MAE: 0.16472, time/step=246ms
2024-08-14 15:44:00,422 - logger.py:50 - [5500/8782] 	e_MAE: 0.12195, f_MAE: 0.16471, time/step=246ms
2024-08-14 15:44:24,016 - logger.py:50 - [5600/8782] 	e_MAE: 0.12193, f_MAE: 0.16470, time/step=246ms
2024-08-14 15:44:50,040 - logger.py:50 - [5700/8782] 	e_MAE: 0.12187, f_MAE: 0.16468, time/step=246ms
2024-08-14 15:45:14,014 - logger.py:50 - [5800/8782] 	e_MAE: 0.12181, f_MAE: 0.16466, time/step=246ms
2024-08-14 15:45:37,430 - logger.py:50 - [5900/8782] 	e_MAE: 0.12179, f_MAE: 0.16464, time/step=246ms
2024-08-14 15:46:01,054 - logger.py:50 - [6000/8782] 	e_MAE: 0.12182, f_MAE: 0.16463, time/step=246ms
2024-08-14 15:46:25,006 - logger.py:50 - [6100/8782] 	e_MAE: 0.12181, f_MAE: 0.16466, time/step=245ms
2024-08-14 15:46:48,584 - logger.py:50 - [6200/8782] 	e_MAE: 0.12178, f_MAE: 0.16465, time/step=245ms
2024-08-14 15:47:11,974 - logger.py:50 - [6300/8782] 	e_MAE: 0.12184, f_MAE: 0.16469, time/step=245ms
2024-08-14 15:47:36,496 - logger.py:50 - [6400/8782] 	e_MAE: 0.12187, f_MAE: 0.16469, time/step=245ms
2024-08-14 15:48:01,346 - logger.py:50 - [6500/8782] 	e_MAE: 0.12183, f_MAE: 0.16470, time/step=245ms
2024-08-14 15:48:25,390 - logger.py:50 - [6600/8782] 	e_MAE: 0.12187, f_MAE: 0.16472, time/step=245ms
2024-08-14 15:48:49,532 - logger.py:50 - [6700/8782] 	e_MAE: 0.12188, f_MAE: 0.16475, time/step=245ms
2024-08-14 15:49:15,980 - logger.py:50 - [6800/8782] 	e_MAE: 0.12188, f_MAE: 0.16473, time/step=245ms
2024-08-14 15:49:41,325 - logger.py:50 - [6900/8782] 	e_MAE: 0.12191, f_MAE: 0.16472, time/step=245ms
2024-08-14 15:50:05,027 - logger.py:50 - [7000/8782] 	e_MAE: 0.12188, f_MAE: 0.16474, time/step=245ms
2024-08-14 15:50:28,611 - logger.py:50 - [7100/8782] 	e_MAE: 0.12184, f_MAE: 0.16472, time/step=245ms
2024-08-14 15:50:51,811 - logger.py:50 - [7200/8782] 	e_MAE: 0.12186, f_MAE: 0.16471, time/step=245ms
2024-08-14 15:51:15,308 - logger.py:50 - [7300/8782] 	e_MAE: 0.12184, f_MAE: 0.16467, time/step=245ms
2024-08-14 15:51:38,965 - logger.py:50 - [7400/8782] 	e_MAE: 0.12180, f_MAE: 0.16467, time/step=245ms
2024-08-14 15:52:02,975 - logger.py:50 - [7500/8782] 	e_MAE: 0.12180, f_MAE: 0.16467, time/step=245ms
2024-08-14 15:52:28,421 - logger.py:50 - [7600/8782] 	e_MAE: 0.12181, f_MAE: 0.16467, time/step=245ms
2024-08-14 15:52:54,411 - logger.py:50 - [7700/8782] 	e_MAE: 0.12183, f_MAE: 0.16472, time/step=245ms
2024-08-14 15:53:20,475 - logger.py:50 - [7800/8782] 	e_MAE: 0.12178, f_MAE: 0.16469, time/step=245ms
2024-08-14 15:53:45,559 - logger.py:50 - [7900/8782] 	e_MAE: 0.12181, f_MAE: 0.16471, time/step=245ms
2024-08-14 15:54:10,333 - logger.py:50 - [8000/8782] 	e_MAE: 0.12183, f_MAE: 0.16469, time/step=245ms
2024-08-14 15:54:34,198 - logger.py:50 - [8100/8782] 	e_MAE: 0.12184, f_MAE: 0.16469, time/step=245ms
2024-08-14 15:54:57,858 - logger.py:50 - [8200/8782] 	e_MAE: 0.12187, f_MAE: 0.16467, time/step=245ms
2024-08-14 15:55:21,719 - logger.py:50 - [8300/8782] 	e_MAE: 0.12183, f_MAE: 0.16463, time/step=245ms
2024-08-14 15:55:47,063 - logger.py:50 - [8400/8782] 	e_MAE: 0.12184, f_MAE: 0.16464, time/step=245ms
2024-08-14 15:56:10,713 - logger.py:50 - [8500/8782] 	e_MAE: 0.12185, f_MAE: 0.16464, time/step=245ms
2024-08-14 15:56:35,121 - logger.py:50 - [8600/8782] 	e_MAE: 0.12187, f_MAE: 0.16467, time/step=245ms
2024-08-14 15:57:01,660 - logger.py:50 - [8700/8782] 	e_MAE: 0.12186, f_MAE: 0.16471, time/step=245ms
2024-08-14 15:57:22,722 - logger.py:50 - [8781/8782] 	e_MAE: 0.12188, f_MAE: 0.16472, time/step=245ms

Question: eV to meV

Hi there,

Great work and a super clean repository! I have a question about the target error values. The QM9 dataset targets are in eV, whereas the errors reported in the paper are in meV. Could you point me to where this conversion happens in the repository?
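
I can only hedge here, since the exact reporting spot in the repo isn't shown, but the arithmetic itself is just a factor of 1000:

mae_eV = 0.0151            # hypothetical QM9 MAE in eV
mae_meV = mae_eV * 1000.0  # 15.1 meV, the unit used in the paper's tables
print(f"{mae_meV:.1f} meV")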

Environment setup: the .yml config file

UnavailableInvalidChannel: The channel is not accessible or is invalid.
channel name: pyg
channel url: http://mirrors.tuna.tsinghua.edu.cn/anaconda/pyg
error code: 404

You will need to adjust your conda configuration to proceed.
Use conda config --show channels to view your configuration's current state,
and use conda config --show-sources to view config file locations.

This mirror-channel error appears while creating the environment from the .yml file. What causes it?

Smooth decrease in L1 Loss

I noticed that the loss decreases smoothly in the QM9 logs. Any practical tips (like EMA) on how to achieve that?
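
The training args in the MD17 log above include model_ema and model_ema_decay flags, which suggests timm-style weight EMA is already wired in. As a generic illustration (not the repo's implementation), a minimal weight-EMA sketch:

import copy
import torch

class ModelEMA:
    """Keep an exponential moving average of model weights; evaluating the
    EMA copy typically yields smoother loss/metric curves."""
    def __init__(self, model, decay=0.999):
        self.ema = copy.deepcopy(model).eval()
        self.decay = decay
        for p in self.ema.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        # Blend current weights into the running average after each optimizer step.
        for ema_t, t in zip(self.ema.state_dict().values(),
                            model.state_dict().values()):
            if ema_t.dtype.is_floating_point:
                ema_t.mul_(self.decay).add_(t, alpha=1.0 - self.decay)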

[Suggestion] Create a standalone, model-only repository, or become part of e3nn

Thanks for the great work! This repository is great for those who want to reproduce your results, but the Equiformer architecture itself is probably more general and versatile than the datasets you tested it on. The new "linear, layer norm, DTP and the whole equiformer" modules are probably useful for others wanting to build on e3nn as well. It would be great if you could separate out the architecture, or perhaps make it part of the e3nn package.
After a brief check of your nets folder, it seems that the equiformer module is already mostly disentangled from the task-specific modifications. I guess that with slightly more effort to completely separate out the task-specific parts and turn them into a standalone repository, or to integrate them into the e3nn package, this work could have a bigger impact.
