lhrrrrrr / cca_series
A PyTorch implementation of DCCA and DCCAE.
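DCCA and DCCAE train two encoders so that the canonical correlation between the encoded views is maximized. The linear quantity being generalized can be sketched in NumPy as follows; this is an illustrative sketch of classical CCA, not the repository's code, and the function name and regularization constant are assumptions.

```python
import numpy as np

def linear_cca_corr(X, Y, reg=1e-4):
    """Canonical correlations between two centered views X (n x dx) and Y (n x dy)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularized covariance and cross-covariance matrices
    Sxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)
    # Whitening: form T = Sxx^{-1/2} Sxy Syy^{-1/2}
    Dx, Vx = np.linalg.eigh(Sxx)
    Dy, Vy = np.linalg.eigh(Syy)
    Sxx_inv_sqrt = Vx @ np.diag(Dx ** -0.5) @ Vx.T
    Syy_inv_sqrt = Vy @ np.diag(Dy ** -0.5) @ Vy.T
    T = Sxx_inv_sqrt @ Sxy @ Syy_inv_sqrt
    # The singular values of T are the canonical correlations
    return np.linalg.svd(T, compute_uv=False)
```

The DCCA training loss in the log below is (up to sign) a sum of such correlations computed on the encoder outputs, which is why it becomes more negative as training improves.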
cpu
Using 0 GPUs
loading data ...
loading data ...
[ INFO : 2023-04-06 12:01:32,336 ] - DataParallel(
  (module): DCCAE(
    (encoder1): MlpNet(
      (layers): ModuleList(
        (0): Sequential(
          (0): Linear(in_features=784, out_features=1024, bias=True)
          (1): Sigmoid()
          (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
        )
        (1): Sequential(
          (0): Linear(in_features=1024, out_features=1024, bias=True)
          (1): Sigmoid()
          (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
        )
        (2): Sequential(
          (0): Linear(in_features=1024, out_features=1024, bias=True)
          (1): Sigmoid()
          (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
        )
        (3): Sequential(
          (0): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
          (1): Linear(in_features=1024, out_features=10, bias=True)
        )
      )
    )
    (encoder2): MlpNet(
      (layers): ModuleList(
        (0): Sequential(
          (0): Linear(in_features=784, out_features=1024, bias=True)
          (1): Sigmoid()
          (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
        )
        (1): Sequential(
          (0): Linear(in_features=1024, out_features=1024, bias=True)
          (1): Sigmoid()
          (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
        )
        (2): Sequential(
          (0): Linear(in_features=1024, out_features=1024, bias=True)
          (1): Sigmoid()
          (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
        )
        (3): Sequential(
          (0): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
          (1): Linear(in_features=1024, out_features=10, bias=True)
        )
      )
    )
    (decoder1): MlpNet(
      (layers): ModuleList(
        (0): Sequential(
          (0): Linear(in_features=10, out_features=1024, bias=True)
          (1): Sigmoid()
          (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
        )
        (1): Sequential(
          (0): Linear(in_features=1024, out_features=1024, bias=True)
          (1): Sigmoid()
          (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
        )
        (2): Sequential(
          (0): Linear(in_features=1024, out_features=1024, bias=True)
          (1): Sigmoid()
          (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
        )
        (3): Sequential(
          (0): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
          (1): Linear(in_features=1024, out_features=784, bias=True)
        )
      )
    )
    (decoder2): MlpNet(
      (layers): ModuleList(
        (0): Sequential(
          (0): Linear(in_features=10, out_features=1024, bias=True)
          (1): Sigmoid()
          (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
        )
        (1): Sequential(
          (0): Linear(in_features=1024, out_features=1024, bias=True)
          (1): Sigmoid()
          (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
        )
        (2): Sequential(
          (0): Linear(in_features=1024, out_features=1024, bias=True)
          (1): Sigmoid()
          (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
        )
        (3): Sequential(
          (0): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=True)
          (1): Linear(in_features=1024, out_features=784, bias=True)
        )
      )
    )
  )
)
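Each MlpNet in the dump above stacks three Linear → Sigmoid → BatchNorm1d blocks and ends with a BatchNorm1d → Linear projection. A minimal sketch of how such a module could be assembled; the helper name `mlp_net` and its signature are illustrative assumptions, not the repository's actual constructor.

```python
import torch
import torch.nn as nn

def mlp_net(in_dim, hidden, out_dim, n_hidden=3):
    """Build the block structure shown in the printed module repr.

    Hidden blocks: Linear -> Sigmoid -> BatchNorm1d (affine=False);
    final block:   BatchNorm1d -> Linear.
    NOTE: hypothetical helper for illustration only.
    """
    layers = nn.ModuleList()
    d = in_dim
    for _ in range(n_hidden):
        layers.append(nn.Sequential(
            nn.Linear(d, hidden),
            nn.Sigmoid(),
            nn.BatchNorm1d(hidden, affine=False),
        ))
        d = hidden
    layers.append(nn.Sequential(
        nn.BatchNorm1d(hidden, affine=False),
        nn.Linear(hidden, out_dim),
    ))
    return layers

# Shapes matching encoder1 above: 784 -> 1024 -> 1024 -> 1024 -> 10
encoder1 = mlp_net(784, 1024, 10)
```

The 784-dimensional input and the two mirrored encoder/decoder pairs indicate a two-view MNIST setup, with each decoder reconstructing its 784-dimensional view from the shared 10-dimensional code.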
[ INFO : 2023-04-06 12:01:32,337 ] - RMSprop (
Parameter Group 0
    alpha: 0.99
    centered: False
    eps: 1e-08
    foreach: None
    lr: 0.001
    momentum: 0
    weight_decay: 1e-05
)
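The logged hyperparameters correspond to an optimizer constructed roughly as below; the placeholder module stands in for the DCCAE model, whose parameters are what the real run optimizes.

```python
import torch

# Placeholder module; the actual run passes the DCCAE's parameters.
model = torch.nn.Linear(784, 10)

optimizer = torch.optim.RMSprop(
    model.parameters(),
    lr=0.001,            # lr: 0.001
    alpha=0.99,          # alpha: 0.99 (smoothing constant)
    eps=1e-08,           # eps: 1e-08
    momentum=0,          # momentum: 0
    weight_decay=1e-05,  # weight_decay: 1e-05 (L2 penalty)
    centered=False,      # centered: False
)
```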
/Users/zhao/Projects/ai/CCA_Series/Dccae/../objectives.py:141: UserWarning: torch.symeig is deprecated in favor of torch.linalg.eigh and will be removed in a future PyTorch release.
The default behavior has changed from using the upper triangular portion of the matrix by default to using the lower triangular portion.
L, _ = torch.symeig(A, upper=upper)
should be replaced with
L = torch.linalg.eigvalsh(A, UPLO='U' if upper else 'L')
and
L, V = torch.symeig(A, eigenvectors=True)
should be replaced with
L, V = torch.linalg.eigh(A, UPLO='U' if upper else 'L') (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1659484780698/work/aten/src/ATen/native/BatchLinearAlgebra.cpp:3041.)
[D1, V1] = torch.symeig(SigmaHat11, eigenvectors=True)
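The warning above spells out the migration: `torch.symeig` defaulted to the upper triangle, while `torch.linalg.eigh` defaults to the lower, so the triangle must be selected explicitly via `UPLO`. The recipe can be wrapped as follows; the helper name `sym_eig` is illustrative, not the repository's code.

```python
import torch

def sym_eig(A, upper=True):
    # Drop-in replacement for the deprecated
    # torch.symeig(A, eigenvectors=True, upper=upper).
    # torch.linalg.eigh reads only the triangle named by UPLO and
    # returns eigenvalues in ascending order, as symeig did.
    return torch.linalg.eigh(A, UPLO='U' if upper else 'L')

A = torch.tensor([[2.0, 1.0],
                  [1.0, 2.0]])
D1, V1 = sym_eig(A)  # eigenvalues of this matrix are 1 and 3
```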
[ INFO : 2023-04-06 12:01:47,858 ] - Epoch 1: val_loss improved from 0.0000 to -5.7144, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:01:47,923 ] - Epoch 1/20 - time: 15.59 - training_loss: -4.9601 - val_loss: -5.7144
[ INFO : 2023-04-06 12:02:02,979 ] - Epoch 2: val_loss improved from -5.7144 to -6.2400, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:02:03,039 ] - Epoch 2/20 - time: 15.12 - training_loss: -5.6013 - val_loss: -6.2400
[ INFO : 2023-04-06 12:02:18,342 ] - Epoch 3: val_loss improved from -6.2400 to -7.0498, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:02:18,404 ] - Epoch 3/20 - time: 15.37 - training_loss: -6.0406 - val_loss: -7.0498
[ INFO : 2023-04-06 12:02:33,191 ] - Epoch 4: val_loss improved from -7.0498 to -7.3024, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:02:33,254 ] - Epoch 4/20 - time: 14.85 - training_loss: -6.4123 - val_loss: -7.3024
[ INFO : 2023-04-06 12:02:48,255 ] - Epoch 5: val_loss improved from -7.3024 to -7.4365, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:02:48,325 ] - Epoch 5/20 - time: 15.07 - training_loss: -6.6911 - val_loss: -7.4365
[ INFO : 2023-04-06 12:03:03,354 ] - Epoch 6: val_loss improved from -7.4365 to -7.5211, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:03:03,408 ] - Epoch 6/20 - time: 15.08 - training_loss: -6.9115 - val_loss: -7.5211
[ INFO : 2023-04-06 12:03:18,382 ] - Epoch 7: val_loss improved from -7.5211 to -7.6173, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:03:18,440 ] - Epoch 7/20 - time: 15.03 - training_loss: -7.0902 - val_loss: -7.6173
[ INFO : 2023-04-06 12:03:33,442 ] - Epoch 8: val_loss improved from -7.6173 to -7.6933, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:03:33,514 ] - Epoch 8/20 - time: 15.07 - training_loss: -7.2408 - val_loss: -7.6933
[ INFO : 2023-04-06 12:03:48,399 ] - Epoch 9: val_loss improved from -7.6933 to -7.7581, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:03:48,461 ] - Epoch 9/20 - time: 14.95 - training_loss: -7.3704 - val_loss: -7.7581
[ INFO : 2023-04-06 12:04:03,122 ] - Epoch 10: val_loss improved from -7.7581 to -7.9082, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:04:03,181 ] - Epoch 10/20 - time: 14.72 - training_loss: -7.4816 - val_loss: -7.9082
[ INFO : 2023-04-06 12:04:18,192 ] - Epoch 11: val_loss did not improve from -7.9082
[ INFO : 2023-04-06 12:04:18,193 ] - Epoch 11/20 - time: 15.01 - training_loss: -7.5805 - val_loss: -7.8961
[ INFO : 2023-04-06 12:04:33,850 ] - Epoch 12: val_loss improved from -7.9082 to -7.9150, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:04:33,905 ] - Epoch 12/20 - time: 15.71 - training_loss: -7.6694 - val_loss: -7.9150
[ INFO : 2023-04-06 12:04:49,641 ] - Epoch 13: val_loss improved from -7.9150 to -7.9408, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:04:49,701 ] - Epoch 13/20 - time: 15.80 - training_loss: -7.7485 - val_loss: -7.9408
[ INFO : 2023-04-06 12:05:05,306 ] - Epoch 14: val_loss did not improve from -7.9408
[ INFO : 2023-04-06 12:05:05,307 ] - Epoch 14/20 - time: 15.61 - training_loss: -7.8191 - val_loss: -7.8902
[ INFO : 2023-04-06 12:05:20,623 ] - Epoch 15: val_loss improved from -7.9408 to -7.9923, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:05:20,715 ] - Epoch 15/20 - time: 15.41 - training_loss: -7.8826 - val_loss: -7.9923
[ INFO : 2023-04-06 12:05:35,985 ] - Epoch 16: val_loss did not improve from -7.9923
[ INFO : 2023-04-06 12:05:35,988 ] - Epoch 16/20 - time: 15.27 - training_loss: -7.9414 - val_loss: -7.9750
[ INFO : 2023-04-06 12:05:51,699 ] - Epoch 17: val_loss improved from -7.9923 to -8.0683, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:05:51,761 ] - Epoch 17/20 - time: 15.77 - training_loss: -8.0033 - val_loss: -8.0683
[ INFO : 2023-04-06 12:06:06,561 ] - Epoch 18: val_loss improved from -8.0683 to -8.0793, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:06:06,691 ] - Epoch 18/20 - time: 14.93 - training_loss: -8.0722 - val_loss: -8.0793
[ INFO : 2023-04-06 12:06:21,838 ] - Epoch 19: val_loss improved from -8.0793 to -8.1336, saving model to DCCAE_checkpoint.model
[ INFO : 2023-04-06 12:06:21,904 ] - Epoch 19/20 - time: 15.21 - training_loss: -8.1463 - val_loss: -8.1336
[ INFO : 2023-04-06 12:06:36,972 ] - Epoch 20: val_loss did not improve from -8.1336
[ INFO : 2023-04-06 12:06:36,973 ] - Epoch 20/20 - time: 15.07 - training_loss: -8.2202 - val_loss: -8.1012
[ INFO : 2023-04-06 12:06:37,898 ] - loss on validation data: -8.1336
[ INFO : 2023-04-06 12:06:38,828 ] - loss on test data: -8.0896