UCCH

Peng Hu, Hongyuan Zhu, Jie Lin, Dezhong Peng, Yin-Ping Zhao, Xi Peng*, Unsupervised Contrastive Cross-modal Hashing, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 45, no. 3, pp. 3877-3889, 1 March 2023, doi: 10.1109/TPAMI.2022.3177356. (PyTorch Code)

Abstract

In this paper, we study how to make unsupervised cross-modal hashing (CMH) benefit from contrastive learning (CL) by overcoming two challenges. Specifically, i) to address the performance degradation caused by the binary optimization of hashing, we propose a novel momentum optimizer that makes the hashing operation learnable within CL, thus making off-the-shelf deep cross-modal hashing possible. In other words, our method does not involve binary-continuous relaxation like most existing methods, and thus enjoys better retrieval performance; ii) to alleviate the influence of false-negative pairs (FNPs), i.e., within-class pairs that are wrongly treated as negative, we propose a Cross-modal Ranking Learning loss (CRL) that exploits the discrimination from all negative pairs instead of only the hard ones. Thanks to such a global strategy, CRL endows our method with better performance because it neither overuses the FNPs nor ignores the true-negative pairs. To the best of our knowledge, the proposed method could be one of the first successful contrastive hashing methods. To demonstrate its effectiveness, we carry out experiments on five widely-used datasets against 13 state-of-the-art methods. The code is available at https://github.com/penghu-cs/UCCH.

Framework

Figure 1: The pipeline of the proposed method, illustrated with a bimodal case. Two modality-specific networks learn unified binary representations for the different modalities. The outputs of the networks interact directly with the hash codes to learn the latent discrimination via instance-level contrast without continuous relaxation, i.e., contrastive hashing learning (𝓛𝒸). The cross-modal ranking loss 𝓛𝑟 bridges cross-modal hashing learning and cross-modal retrieval.
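
To make the contrastive part concrete, below is a minimal sketch of an instance-level contrastive hashing term in PyTorch. The function name, temperature, and normalization are our assumptions for illustration; this is not the exact objective implemented in UCCH.py. Per the abstract, UCCH keeps the hashing operation itself learnable through the momentum optimizer rather than relying on binary-continuous relaxation.

```python
import torch
import torch.nn.functional as F

def contrastive_hashing_loss(h, b, temperature=0.5):
    """Illustrative InfoNCE-style sketch (not UCCH's exact objective):
    pull each continuous output h_i toward its own binary code b_i and
    push it away from the codes of other instances.
    h: (N, bit) real-valued network outputs; b: (N, bit) codes in {-1, +1}."""
    h = F.normalize(h, dim=1)
    b = F.normalize(b.float(), dim=1)
    logits = h @ b.t() / temperature                      # (N, N) similarities
    targets = torch.arange(h.size(0), device=h.device)    # positives on diagonal
    return F.cross_entropy(logits, targets)
```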

Usage

To train a model with 128 bits on MIRFLICKR-25K, just run UCCH.py:

# Features
python UCCH.py --data_name mirflickr25k_fea --bit 128 --alpha 0.7 --num_hiden_layers 3 2 --margin 0.2 --max_epochs 20 --train_batch_size 256 --shift 0.1 --lr 0.0001 --optimizer Adam

# Raw data
python UCCH.py --data_name mirflickr25k --bit 128 --alpha 0.7 --num_hiden_layers 3 2 --margin 0.2 --max_epochs 20 --train_batch_size 256 --shift 0.1 --lr 0.0001 --optimizer Adam --warmup_epoch 5 --pretrain -a vgg11

You should get output like the following:

Epoch: 13 / 20
[================= 70/70 ====================>]  Step: 28ms | Tot: 2s18ms | Loss: 13.205 | LR: 0.0001                                                                                                             
Evaluation:	Img2Txt: 0.75797 	 Txt2Img: 0.759172 	 Avg: 0.758571

Epoch: 14 / 20
[================= 70/70 ====================>]  Step: 28ms | Tot: 1s951ms | Loss: 13.193 | LR: 0.0001                                                                                                            
Evaluation:	Img2Txt: 0.759404 	 Txt2Img: 0.759482 	 Avg: 0.759443

Epoch: 15 / 20
[================= 70/70 ====================>]  Step: 28ms | Tot: 1s965ms | Loss: 13.180 | LR: 0.0001                                                                                                            
Evaluation:	Img2Txt: 0.758604 	 Txt2Img: 0.75909 	 Avg: 0.758847

Epoch: 16 / 20
[================= 70/70 ====================>]  Step: 28ms | Tot: 1s973ms | Loss: 13.170 | LR: 0.0001                                                                                                            
Evaluation:	Img2Txt: 0.758019 	 Txt2Img: 0.757934 	 Avg: 0.757976

Epoch: 17 / 20
[================= 70/70 ====================>]  Step: 28ms | Tot: 1s973ms | Loss: 13.160 | LR: 0.0001                                                                                                            
Evaluation:	Img2Txt: 0.757612 	 Txt2Img: 0.758054 	 Avg: 0.757833

Epoch: 18 / 20
[================= 70/70 ====================>]  Step: 29ms | Tot: 1s968ms | Loss: 13.151 | LR: 0.0001                                                                                                            
Evaluation:	Img2Txt: 0.757199 	 Txt2Img: 0.757834 	 Avg: 0.757517

Epoch: 19 / 20
[================= 70/70 ====================>]  Step: 30ms | Tot: 2s43ms | Loss: 13.144 | LR: 0.0001                                                                                                             
Evaluation:	Img2Txt: 0.757373 	 Txt2Img: 0.757289 	 Avg: 0.757331
Test:	Img2Txt: 0.769567 	 Txt2Img: 0.746658 	 Avg: 0.758112

Comparison with the State-of-the-Art

Table 1: Performance comparison in terms of MAP scores on the MIRFLICKR-25K and IAPR TC-12 datasets. The highest score is shown in boldface.

MIRFLICKR-25K:

| Method | Image → Text 16 | 32 | 64 | 128 | Text → Image 16 | 32 | 64 | 128 |
|---|---|---|---|---|---|---|---|---|
| CVH[20] | 0.620 | 0.608 | 0.594 | 0.583 | 0.629 | 0.615 | 0.599 | 0.587 |
| LSSH[59] | 0.597 | 0.609 | 0.606 | 0.605 | 0.602 | 0.598 | 0.598 | 0.597 |
| CMFH[60] | 0.557 | 0.557 | 0.556 | 0.557 | 0.553 | 0.553 | 0.553 | 0.553 |
| FSH[18] | 0.581 | 0.612 | 0.635 | 0.662 | 0.576 | 0.607 | 0.635 | 0.660 |
| DLFH[23] | 0.638 | 0.658 | 0.677 | 0.684 | 0.675 | 0.700 | 0.718 | 0.725 |
| MTFH[16] | 0.507 | 0.512 | 0.558 | 0.554 | 0.514 | 0.524 | 0.518 | 0.581 |
| FOMH[58] | 0.575 | 0.640 | 0.691 | 0.659 | 0.585 | 0.648 | 0.719 | 0.688 |
| DCH[34] | 0.596 | 0.602 | 0.626 | 0.636 | 0.612 | 0.623 | 0.653 | 0.665 |
| UGACH[61] | 0.685 | 0.693 | 0.704 | 0.702 | 0.673 | 0.676 | 0.686 | 0.690 |
| DJSRH[62] | 0.652 | 0.697 | 0.700 | 0.716 | 0.662 | 0.691 | 0.683 | 0.695 |
| JDSH[63] | 0.724 | 0.734 | 0.741 | 0.745 | 0.710 | 0.720 | 0.733 | 0.720 |
| DGCPN[64] | 0.711 | 0.723 | 0.737 | 0.748 | 0.695 | 0.707 | 0.725 | 0.731 |
| UCH[13] | 0.654 | 0.669 | 0.679 | / | 0.661 | 0.667 | 0.668 | / |
| UCCH | **0.739** | **0.744** | **0.754** | **0.760** | **0.725** | **0.725** | **0.743** | **0.747** |

IAPR TC-12:

| Method | Image → Text 16 | 32 | 64 | 128 | Text → Image 16 | 32 | 64 | 128 |
|---|---|---|---|---|---|---|---|---|
| CVH[20] | 0.392 | 0.378 | 0.366 | 0.353 | 0.398 | 0.384 | 0.372 | 0.360 |
| LSSH[59] | 0.372 | 0.386 | 0.396 | 0.404 | 0.367 | 0.380 | 0.392 | 0.401 |
| CMFH[60] | 0.312 | 0.314 | 0.314 | 0.315 | 0.306 | 0.306 | 0.306 | 0.306 |
| FSH[18] | 0.377 | 0.392 | 0.417 | 0.445 | 0.383 | 0.399 | 0.425 | 0.451 |
| DLFH[23] | 0.342 | 0.358 | 0.374 | 0.395 | 0.358 | 0.380 | 0.403 | 0.434 |
| MTFH[16] | 0.277 | 0.324 | 0.303 | 0.311 | 0.294 | 0.337 | 0.269 | 0.297 |
| FOMH[58] | 0.312 | 0.316 | 0.317 | 0.350 | 0.311 | 0.315 | 0.322 | 0.373 |
| DCH[34] | 0.336 | 0.336 | 0.344 | 0.352 | 0.350 | 0.358 | 0.374 | 0.391 |
| UGACH[61] | 0.462 | 0.467 | 0.469 | 0.480 | 0.447 | 0.463 | 0.468 | 0.463 |
| DJSRH[62] | 0.409 | 0.412 | 0.470 | 0.480 | 0.418 | 0.436 | 0.467 | 0.478 |
| JDSH[63] | 0.449 | 0.472 | 0.478 | 0.484 | 0.447 | 0.477 | 0.473 | 0.486 |
| DGCPN[64] | 0.465 | 0.485 | 0.486 | 0.495 | 0.467 | **0.488** | 0.491 | 0.497 |
| UCH[13] | 0.447 | 0.471 | 0.485 | / | 0.446 | 0.469 | 0.488 | / |
| UCCH | **0.478** | **0.491** | **0.503** | **0.508** | **0.474** | **0.488** | **0.503** | **0.508** |

Table 2: Performance comparison in terms of MAP scores on the NUS-WIDE and MS-COCO datasets. The highest score is shown in boldface.

NUS-WIDE:

| Method | Image → Text 16 | 32 | 64 | 128 | Text → Image 16 | 32 | 64 | 128 |
|---|---|---|---|---|---|---|---|---|
| CVH[20] | 0.487 | 0.495 | 0.456 | 0.419 | 0.470 | 0.475 | 0.444 | 0.412 |
| LSSH[59] | 0.442 | 0.457 | 0.450 | 0.451 | 0.473 | 0.482 | 0.471 | 0.457 |
| CMFH[60] | 0.339 | 0.338 | 0.343 | 0.339 | 0.306 | 0.306 | 0.306 | 0.306 |
| FSH[18] | 0.557 | 0.565 | 0.598 | 0.635 | 0.569 | 0.604 | 0.651 | 0.666 |
| DLFH[23] | 0.385 | 0.399 | 0.443 | 0.445 | 0.421 | 0.421 | 0.462 | 0.474 |
| MTFH[16] | 0.297 | 0.297 | 0.272 | 0.328 | 0.353 | 0.314 | 0.399 | 0.410 |
| FOMH[58] | 0.305 | 0.305 | 0.306 | 0.314 | 0.302 | 0.304 | 0.300 | 0.306 |
| DCH[34] | 0.392 | 0.422 | 0.430 | 0.436 | 0.379 | 0.432 | 0.444 | 0.459 |
| UGACH[61] | 0.613 | 0.623 | 0.628 | 0.631 | 0.603 | 0.614 | 0.640 | 0.641 |
| DJSRH[62] | 0.502 | 0.538 | 0.527 | 0.556 | 0.465 | 0.532 | 0.538 | 0.545 |
| JDSH[63] | 0.647 | 0.656 | 0.679 | 0.680 | 0.649 | 0.669 | 0.689 | 0.699 |
| DGCPN[64] | 0.610 | 0.614 | 0.635 | 0.641 | 0.617 | 0.621 | 0.642 | 0.647 |
| UCH[13] | / | / | / | / | / | / | / | / |
| UCCH | **0.698** | **0.708** | **0.737** | **0.742** | **0.701** | **0.724** | **0.745** | **0.750** |

MS-COCO:

| Method | Image → Text 16 | 32 | 64 | 128 | Text → Image 16 | 32 | 64 | 128 |
|---|---|---|---|---|---|---|---|---|
| CVH[20] | 0.503 | 0.504 | 0.471 | 0.425 | 0.506 | 0.508 | 0.476 | 0.429 |
| LSSH[59] | 0.484 | 0.525 | 0.542 | 0.551 | 0.490 | 0.522 | 0.547 | 0.560 |
| CMFH[60] | 0.366 | 0.369 | 0.370 | 0.365 | 0.346 | 0.346 | 0.346 | 0.346 |
| FSH[18] | 0.539 | 0.549 | 0.576 | 0.587 | 0.537 | 0.524 | 0.564 | 0.573 |
| DLFH[23] | 0.522 | 0.580 | 0.614 | 0.631 | 0.444 | 0.489 | 0.513 | 0.534 |
| MTFH[16] | 0.399 | 0.293 | 0.295 | 0.395 | 0.335 | 0.374 | 0.300 | 0.334 |
| FOMH[58] | 0.378 | 0.514 | 0.571 | 0.601 | 0.368 | 0.484 | 0.559 | 0.595 |
| DCH[34] | 0.422 | 0.420 | 0.446 | 0.468 | 0.421 | 0.428 | 0.454 | 0.471 |
| UGACH[61] | 0.553 | 0.599 | 0.598 | 0.615 | 0.581 | 0.605 | 0.629 | 0.635 |
| DJSRH[62] | 0.501 | 0.563 | 0.595 | 0.615 | 0.494 | 0.569 | 0.604 | 0.622 |
| JDSH[63] | 0.579 | 0.628 | 0.647 | 0.662 | 0.578 | 0.634 | 0.659 | 0.672 |
| DGCPN[64] | 0.552 | 0.590 | 0.602 | 0.596 | 0.564 | 0.590 | 0.597 | 0.597 |
| UCH[13] | 0.521 | 0.534 | 0.547 | / | 0.499 | 0.519 | 0.545 | / |
| UCCH | **0.605** | **0.645** | **0.655** | **0.665** | **0.610** | **0.655** | **0.666** | **0.677** |

Ablation Study

Table 3: Ablation study on different datasets. The highest score is shown in boldface.

IAPR TC-12:

| Method | Image → Text 16 | 32 | 64 | 128 | Text → Image 16 | 32 | 64 | 128 |
|---|---|---|---|---|---|---|---|---|
| UCCH (with 𝓛𝒸 only) | 0.457 | 0.469 | 0.478 | 0.482 | 0.447 | 0.469 | 0.483 | 0.486 |
| UCCH (with 𝓛'𝑟, 𝑚=0.1 only) | 0.410 | 0.426 | 0.432 | 0.438 | 0.421 | 0.434 | 0.461 | 0.460 |
| UCCH (with 𝓛'𝑟, 𝑚=0.5 only) | 0.423 | 0.446 | 0.463 | 0.470 | 0.434 | 0.450 | 0.471 | 0.479 |
| UCCH (with 𝓛'𝑟, 𝑚=0.9 only) | 0.444 | 0.460 | 0.472 | 0.480 | 0.450 | 0.472 | 0.469 | 0.476 |
| UCCH (with 𝓛𝑟 only) | 0.461 | 0.482 | 0.496 | 0.495 | 0.457 | 0.476 | 0.492 | 0.488 |
| Full UCCH | **0.478** | **0.491** | **0.503** | **0.508** | **0.474** | **0.488** | **0.503** | **0.508** |

MS-COCO:

| Method | Image → Text 16 | 32 | 64 | 128 | Text → Image 16 | 32 | 64 | 128 |
|---|---|---|---|---|---|---|---|---|
| UCCH (with 𝓛𝒸 only) | 0.577 | 0.605 | 0.621 | 0.624 | 0.579 | 0.610 | 0.626 | 0.627 |
| UCCH (with 𝓛'𝑟, 𝑚=0.1 only) | 0.495 | 0.512 | 0.548 | 0.555 | 0.483 | 0.503 | 0.534 | 0.549 |
| UCCH (with 𝓛'𝑟, 𝑚=0.5 only) | 0.499 | 0.525 | 0.554 | 0.579 | 0.498 | 0.527 | 0.546 | 0.566 |
| UCCH (with 𝓛'𝑟, 𝑚=0.9 only) | 0.529 | 0.535 | 0.554 | 0.558 | 0.525 | 0.545 | 0.546 | 0.560 |
| UCCH (with 𝓛𝑟 only) | 0.563 | 0.574 | 0.599 | 0.602 | 0.563 | 0.576 | 0.606 | 0.609 |
| Full UCCH | **0.605** | **0.645** | **0.655** | **0.665** | **0.610** | **0.655** | **0.666** | **0.677** |

Citation

If you find UCCH useful in your research, please consider citing:

@article{hu2022UCCH,
   title={Unsupervised Contrastive Cross-modal Hashing},
   author={Hu, Peng and Zhu, Hongyuan and Lin, Jie and Peng, Dezhong and Zhao, Yin-Ping and Peng, Xi},
   journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
   year={2023},
   volume={45},
   number={3},
   pages={3877--3889},
   doi={10.1109/TPAMI.2022.3177356}
}


Issues

Data split mistake in your code

I noticed there may be an issue with how the dataset is split. In src/cmdataset.py (line 138 and the subsequent 'else' branches), the training set may not be properly separated from the retrieval set. As a result, the lengths of train_dataset and retrieval_dataset were identical when I printed them in UCCH.py. This could cause the model to overfit due to prior information being present during training. I kindly request your attention to this matter and would greatly appreciate a fix.
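
A quick way to reproduce the reported symptom is the hypothetical check below; the `indices` attribute is an assumption about the dataset objects, not the repository's actual API:

```python
def check_split(train_dataset, retrieval_dataset):
    """Hypothetical leakage check: with a proper split, the two datasets
    should differ in length and share no sample indices."""
    print(len(train_dataset), len(retrieval_dataset))
    train_idx = set(map(int, getattr(train_dataset, 'indices', [])))
    retr_idx = set(map(int, getattr(retrieval_dataset, 'indices', [])))
    print('overlap:', len(train_idx & retr_idx))  # a large overlap suggests leakage
```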

IAPR TC-12 dataset preprocessing

Hello, could you share how you preprocessed the IAPR TC-12 dataset? The dataset downloaded from the official website has no labels.

Hello, I'd like to ask you about some experimental results.

For MS-COCO at 128 bits, Table 2 of the paper reports Img2Txt: 0.665 and Txt2Img: 0.677, and Fig. 9 reports Img2Txt: 0.83 and Txt2Img: 0.83 on the validation set.
However, using your dataset, I obtained the following results:
Epoch: 18 / 20
[========================================== 457/457
Evaluation
Img2Txt: 0.629952 Txt2Img: 0.628445 Avg: 0.629199
Saving..

Epoch: 19 / 20
[========================================= 457/457
Evaluation
Img2Txt: 0.631842 Txt2Img: 0.630144 Avg: 0.630993
Saving..
Test
Img2Txt: 0.786291 Txt2Img: 0.805052 Avg: 0.795671
Saving..

For IAPR TC-12 at 128 bits, Table 1 of the paper reports Img2Txt: 0.508 and Txt2Img: 0.508, and Fig. 9 reports Img2Txt: 0.505 and Txt2Img: 0.505 on the validation set.
However, using your dataset, I obtained the following results:
Epoch: 18 / 20
70/70
Evaluation
Img2Txt: 0.496356 Txt2Img: 0.496224 Avg: 0.49629

Epoch: 19 / 20
70/70
Evaluation
Img2Txt: 0.496524 Txt2Img: 0.496117 Avg: 0.49632
Test
Img2Txt: 0.629495 Txt2Img: 0.630433 Avg: 0.629964

Saving..

The difference is too large on both the validation set and the test set. Could there be an issue with how I ran it? Looking forward to your response, and thank you once again.

Some questions about the memory-bank

Hello, in UCCH you adopt a momentum update approach similar to MoCo. However, in my experiments I found that the hyperparameter K has little impact on performance. I wonder whether you encountered this in your experiments, or whether this result is unique to my machine. Would choosing a smaller K, or removing the memory bank altogether, further improve efficiency? Here are my experimental results (MIRFLICKR-25K):

128bit
K:256 Img2Txt: 0.768999 Txt2Img: 0.742612 Avg: 0.755805
K:512 Img2Txt: 0.769573 Txt2Img: 0.74065 Avg: 0.755112
K:1024 Img2Txt: 0.769944 Txt2Img: 0.740525 Avg: 0.755235
K:2048 Img2Txt: 0.768865 Txt2Img: 0.742405 Avg: 0.755635
K:4096 Img2Txt: 0.768886 Txt2Img: 0.742765 Avg: 0.755825

16bit:
K:256 Img2Txt: 0.734208 Txt2Img: 0.70426 Avg: 0.719234
K:512 Img2Txt: 0.731359 Txt2Img: 0.702105 Avg: 0.716732
K:1024 Img2Txt: 0.7338 Txt2Img: 0.703864 Avg: 0.718832
K:2048 Img2Txt: 0.734707 Txt2Img: 0.704817 Avg: 0.719762
K:4096 Img2Txt: 0.734259 Txt2Img: 0.706975 Avg: 0.720617

32bit:
K:256 Img2Txt: 0.759809 Txt2Img: 0.734137 Avg: 0.746973
K:512 Img2Txt: 0.759853 Txt2Img: 0.734565 Avg: 0.747209
K:1024 Img2Txt: 0.760381 Txt2Img: 0.735373 Avg: 0.747877
K:2048 Img2Txt: 0.76008 Txt2Img: 0.73348 Avg: 0.74678
K:4096 Img2Txt: 0.759792 Txt2Img: 0.734154 Avg: 0.746973
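
For context, a MoCo-style memory bank is essentially a FIFO queue of codes; a generic sketch (not UCCH's exact implementation) is below. If the loss is dominated by in-batch terms, varying K beyond the batch size could plausibly matter little, which would be consistent with the numbers above.

```python
import torch

class MemoryBank:
    """Generic MoCo-style FIFO queue of negative codes (a sketch only)."""
    def __init__(self, bit, K=4096):
        self.K = K
        self.queue = torch.randn(K, bit).sign()  # initialize with random +/-1 codes
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, codes):
        """codes: (B, bit) float tensor on the same device as the queue."""
        B = codes.size(0)
        idx = (self.ptr + torch.arange(B)) % self.K  # wrap around the queue
        self.queue[idx] = codes
        self.ptr = (self.ptr + B) % self.K
```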

During validation, are the retrieval set and the query set the same?

    (retrieval_imgs, retrieval_txts, retrieval_labs) = eval(retrieval_loader)
    if is_eval:
        query_imgs, query_txts, query_labs = retrieval_imgs[0: 2000], retrieval_txts[0:2000], retrieval_labs[0: 2000]
        retrieval_imgs, retrieval_txts, retrieval_labs = retrieval_imgs[0: 2000], retrieval_txts[0: 2000], retrieval_labs[0:2000]
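
For reference, the usual protocol keeps the query and retrieval sets disjoint. A hypothetical variant of the snippet above (our sketch, not a confirmed fix from the authors) would be:

```python
if is_eval:
    # Hypothetical disjoint split: first 2000 items as queries, the rest as
    # the retrieval database, so queries are not ranked against themselves.
    query_imgs, query_txts, query_labs = retrieval_imgs[:2000], retrieval_txts[:2000], retrieval_labs[:2000]
    retrieval_imgs, retrieval_txts, retrieval_labs = retrieval_imgs[2000:], retrieval_txts[2000:], retrieval_labs[2000:]
```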

A question about the fx_calc_map_multilabel_k function in UCCH

Hello, when computing mAP@all I have some doubts about the fx_calc_map_multilabel_k function in UCCH.py.
dist = scipy.spatial.distance.cdist(query, retrieval, metric)
ord = dist.argsort()
numcases = dist.shape[0]
if k == 0:
    k = numcases
When k = 0, this should mean mAP@all, but numcases = dist.shape[0] is clearly not the retrieval size: it returns the number of queries, whereas k needs to equal the number of retrieval items.
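
If that reading is correct, a hedged sketch of the intended behavior (ours, not the repository's function) would default k to dist.shape[1], the retrieval size:

```python
import numpy as np
from scipy.spatial.distance import cdist

def map_multilabel_k_sketch(query, retrieval, query_labs, retrieval_labs,
                            k=0, metric='hamming'):
    """Sketch of multi-label mAP@k; for mAP@all, k should be the retrieval size."""
    dist = cdist(query, retrieval, metric)
    order = dist.argsort(axis=1)
    if k == 0:
        k = dist.shape[1]  # number of retrieval items, not dist.shape[0]
    aps = []
    for i in range(dist.shape[0]):
        ranked_labs = retrieval_labs[order[i, :k]]
        rel = (ranked_labs @ query_labs[i] > 0).astype(np.float64)  # shares a label
        if rel.sum() == 0:
            aps.append(0.0)
            continue
        prec = np.cumsum(rel) / np.arange(1, len(rel) + 1)
        aps.append(float((prec * rel).sum() / rel.sum()))
    return float(np.mean(aps))
```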

dataset

I think this paper is very good. Could you please share your dataset?


IAPR TC-12 test results

Thank you for releasing the code. I ran into some problems when testing on the IAPR TC-12 dataset: with the configuration below I obtained Img2Txt: 0.471749, Txt2Img: 0.468014, Avg: 0.469881. Do my parameter settings differ from yours in any way? Thank you.
Model configuration:
parser.add_argument("--data_name", type=str, default="iapr_fea", help="data name")
parser.add_argument('--root_dir', type=str, default='./')
parser.add_argument('--log_name', type=str, default='UCCH')
parser.add_argument('--pretrain', action='store_true', default=False)
parser.add_argument('--pretrain_dir', type=str, default='UCCH')
parser.add_argument('--arch', '-a', metavar='ARCH', default='vgg11', help='model architecture: ' + ' | '.join(['ResNet', 'VGG']) + ' (default: vgg11)')
parser.add_argument('--lr', type=float, default=1e-4)
parser.add_argument('--wd', type=float, default=1e-6)
parser.add_argument('--train_batch_size', type=int, default=256)
parser.add_argument('--eval_batch_size', type=int, default=256)
parser.add_argument('--max_epochs', type=int, default=100)
parser.add_argument('--log_interval', type=int, default=40)
parser.add_argument('--num_workers', type=int, default=5)
parser.add_argument('--resume', default='', type=str, metavar='PATH', help='path to latest checkpoint (default: none)')
parser.add_argument('--num_hiden_layers', default=[3, 2], nargs='+', help='Number of hidden layers')
parser.add_argument('--ls', type=str, default='linear', help='lr scheduler')
parser.add_argument('--bit', type=int, default=32, help='output shape')
parser.add_argument('--optimizer', type=str, default='Adam')
parser.add_argument('--alpha', type=float, default=.9)
parser.add_argument('--momentum', type=float, default=0.4)
parser.add_argument('--K', type=int, default=4096)
parser.add_argument('--T', type=float, default=.9)
parser.add_argument('--shift', type=float, default=1)
parser.add_argument('--margin', type=float, default=.2)
parser.add_argument('--warmup_epoch', type=int, default=1)

Training and testing:
Epoch: 0 / 100
[================= 70/70 ====================>] Step: 42ms | Tot: 3s1ms | Loss: 9.875 | LR: 0.0001
Evaluation
Img2Txt: 0.451901 Txt2Img: 0.451052 Avg: 0.451476
Saving..

Epoch: 1 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s963ms | Loss: 14.796 | LR: 0.0001
Evaluation
Img2Txt: 0.453205 Txt2Img: 0.456005 Avg: 0.454605
Saving..

Epoch: 2 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s930ms | Loss: 14.545 | LR: 0.0001
Evaluation
Img2Txt: 0.458808 Txt2Img: 0.463913 Avg: 0.46136
Saving..

Epoch: 3 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s927ms | Loss: 14.429 | LR: 0.0001
Evaluation
Img2Txt: 0.457709 Txt2Img: 0.458465 Avg: 0.458087

Epoch: 4 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s925ms | Loss: 14.349 | LR: 0.0001
Evaluation
Img2Txt: 0.461355 Txt2Img: 0.462978 Avg: 0.462166
Saving..

Epoch: 5 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s930ms | Loss: 14.284 | LR: 0.0001
Evaluation
Img2Txt: 0.464578 Txt2Img: 0.464815 Avg: 0.464696
Saving..

Epoch: 6 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s920ms | Loss: 14.221 | LR: 0.0001
Evaluation
Img2Txt: 0.46529 Txt2Img: 0.466595 Avg: 0.465942
Saving..

Epoch: 7 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s925ms | Loss: 14.166 | LR: 0.0001
Evaluation
Img2Txt: 0.468146 Txt2Img: 0.467941 Avg: 0.468044
Saving..

Epoch: 8 / 100
[================= 70/70 ====================>] Step: 40ms | Tot: 2s920ms | Loss: 14.112 | LR: 0.0001
Evaluation
Img2Txt: 0.468523 Txt2Img: 0.468249 Avg: 0.468386
Saving..

Epoch: 9 / 100
[================= 70/70 ====================>] Step: 40ms | Tot: 2s921ms | Loss: 14.077 | LR: 0.0001
Evaluation
Img2Txt: 0.469653 Txt2Img: 0.469486 Avg: 0.46957
Saving..

Epoch: 10 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s929ms | Loss: 14.050 | LR: 0.0001
Evaluation
Img2Txt: 0.471134 Txt2Img: 0.470847 Avg: 0.47099
Saving..

Epoch: 11 / 100
[================= 70/70 ====================>] Step: 40ms | Tot: 2s922ms | Loss: 14.028 | LR: 0.0001
Evaluation
Img2Txt: 0.471837 Txt2Img: 0.471591 Avg: 0.471714
Saving..

Epoch: 12 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s923ms | Loss: 14.008 | LR: 0.0001
Evaluation
Img2Txt: 0.471294 Txt2Img: 0.471955 Avg: 0.471624

Epoch: 13 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s919ms | Loss: 13.994 | LR: 0.0001
Evaluation
Img2Txt: 0.472022 Txt2Img: 0.471606 Avg: 0.471814
Saving..

Epoch: 14 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s921ms | Loss: 13.984 | LR: 0.0001
Evaluation
Img2Txt: 0.471669 Txt2Img: 0.47188 Avg: 0.471775

Epoch: 15 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s924ms | Loss: 13.976 | LR: 0.0001
Evaluation
Img2Txt: 0.471544 Txt2Img: 0.471567 Avg: 0.471555

Epoch: 16 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s924ms | Loss: 13.971 | LR: 0.0001
Evaluation
Img2Txt: 0.47193 Txt2Img: 0.47138 Avg: 0.471655

Epoch: 17 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s997ms | Loss: 13.966 | LR: 0.0001
Evaluation
Img2Txt: 0.471721 Txt2Img: 0.471596 Avg: 0.471659

Epoch: 18 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s929ms | Loss: 13.963 | LR: 0.0001
Evaluation
Img2Txt: 0.472012 Txt2Img: 0.471695 Avg: 0.471854
Saving..

Epoch: 19 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s925ms | Loss: 13.960 | LR: 0.0001
Evaluation
Img2Txt: 0.472071 Txt2Img: 0.471604 Avg: 0.471838

Epoch: 20 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s927ms | Loss: 13.958 | LR: 0.0001
Evaluation
Img2Txt: 0.471887 Txt2Img: 0.471635 Avg: 0.471761

Epoch: 21 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s921ms | Loss: 13.957 | LR: 0.0001
Evaluation
Img2Txt: 0.47198 Txt2Img: 0.471801 Avg: 0.471891
Saving..

Epoch: 22 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s922ms | Loss: 13.956 | LR: 0.0001
Evaluation
Img2Txt: 0.471793 Txt2Img: 0.471491 Avg: 0.471642

Epoch: 23 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s927ms | Loss: 13.955 | LR: 0.0001
Evaluation
Img2Txt: 0.472011 Txt2Img: 0.471643 Avg: 0.471827

Epoch: 24 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s923ms | Loss: 13.954 | LR: 0.0001
Evaluation
Img2Txt: 0.471935 Txt2Img: 0.471625 Avg: 0.47178

Epoch: 25 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s924ms | Loss: 13.954 | LR: 0.0001
Evaluation
Img2Txt: 0.471816 Txt2Img: 0.471778 Avg: 0.471797

Epoch: 26 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s923ms | Loss: 13.953 | LR: 0.0001
Evaluation
Img2Txt: 0.471819 Txt2Img: 0.4716 Avg: 0.471709

Epoch: 27 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s920ms | Loss: 13.953 | LR: 0.0001
Evaluation
Img2Txt: 0.47187 Txt2Img: 0.471855 Avg: 0.471863

Epoch: 28 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s917ms | Loss: 13.953 | LR: 0.0001
Evaluation
Img2Txt: 0.471917 Txt2Img: 0.472014 Avg: 0.471966
Saving..

Epoch: 29 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s922ms | Loss: 13.952 | LR: 0.0001
Evaluation
Img2Txt: 0.471872 Txt2Img: 0.471926 Avg: 0.471899

Epoch: 30 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s928ms | Loss: 13.952 | LR: 0.0001
Evaluation
Img2Txt: 0.471692 Txt2Img: 0.471899 Avg: 0.471796

Epoch: 31 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s917ms | Loss: 13.949 | LR: 1e-05
Evaluation
Img2Txt: 0.472117 Txt2Img: 0.471916 Avg: 0.472017
Saving..

Epoch: 32 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s925ms | Loss: 13.948 | LR: 1e-05
Evaluation
Img2Txt: 0.471961 Txt2Img: 0.471928 Avg: 0.471944

Epoch: 33 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s930ms | Loss: 13.947 | LR: 1e-05
Evaluation
Img2Txt: 0.471839 Txt2Img: 0.471905 Avg: 0.471872

Epoch: 34 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s928ms | Loss: 13.947 | LR: 1e-05
Evaluation
Img2Txt: 0.471981 Txt2Img: 0.471981 Avg: 0.471981

Epoch: 35 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s928ms | Loss: 13.946 | LR: 1e-05
Evaluation
Img2Txt: 0.471843 Txt2Img: 0.471965 Avg: 0.471904

Epoch: 36 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s919ms | Loss: 13.946 | LR: 1e-05
Evaluation
Img2Txt: 0.471973 Txt2Img: 0.471943 Avg: 0.471958

Epoch: 37 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s929ms | Loss: 13.946 | LR: 1e-05
Evaluation
Img2Txt: 0.471744 Txt2Img: 0.472002 Avg: 0.471873

Epoch: 38 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s947ms | Loss: 13.946 | LR: 1e-05
Evaluation
Img2Txt: 0.471726 Txt2Img: 0.471979 Avg: 0.471852

Epoch: 39 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s925ms | Loss: 13.946 | LR: 1e-05
Evaluation
Img2Txt: 0.471826 Txt2Img: 0.471953 Avg: 0.47189

Epoch: 40 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s926ms | Loss: 13.946 | LR: 1e-05
Evaluation
Img2Txt: 0.47182 Txt2Img: 0.471969 Avg: 0.471894

Epoch: 41 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s917ms | Loss: 13.946 | LR: 1e-05
Evaluation
Img2Txt: 0.471834 Txt2Img: 0.471982 Avg: 0.471908

Epoch: 42 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s923ms | Loss: 13.946 | LR: 1e-05
Evaluation
Img2Txt: 0.471956 Txt2Img: 0.471948 Avg: 0.471952

Epoch: 43 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s924ms | Loss: 13.946 | LR: 1e-05
Evaluation
Img2Txt: 0.471862 Txt2Img: 0.471979 Avg: 0.471921

Epoch: 44 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s928ms | Loss: 13.946 | LR: 1e-05
Evaluation
Img2Txt: 0.47187 Txt2Img: 0.471959 Avg: 0.471915

Epoch: 45 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s926ms | Loss: 13.946 | LR: 1e-05
Evaluation
Img2Txt: 0.471792 Txt2Img: 0.471953 Avg: 0.471873

Epoch: 46 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s926ms | Loss: 13.945 | LR: 1e-05
Evaluation
Img2Txt: 0.471685 Txt2Img: 0.472025 Avg: 0.471855

Epoch: 47 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s926ms | Loss: 13.945 | LR: 1e-05
Evaluation
Img2Txt: 0.471827 Txt2Img: 0.472015 Avg: 0.471921

Epoch: 48 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s928ms | Loss: 13.946 | LR: 1e-05
Evaluation
Img2Txt: 0.472013 Txt2Img: 0.471981 Avg: 0.471997

Epoch: 49 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s922ms | Loss: 13.946 | LR: 1e-05
Evaluation
Img2Txt: 0.471937 Txt2Img: 0.471982 Avg: 0.471959

Epoch: 50 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s919ms | Loss: 13.945 | LR: 1e-05
Evaluation
Img2Txt: 0.471814 Txt2Img: 0.471958 Avg: 0.471886

Epoch: 51 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s921ms | Loss: 13.946 | LR: 1e-05
Evaluation
Img2Txt: 0.471996 Txt2Img: 0.471978 Avg: 0.471987

Epoch: 52 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s920ms | Loss: 13.945 | LR: 1e-05
Evaluation
Img2Txt: 0.471771 Txt2Img: 0.471981 Avg: 0.471876

Epoch: 53 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s927ms | Loss: 13.945 | LR: 1e-05
Evaluation
Img2Txt: 0.47176 Txt2Img: 0.471986 Avg: 0.471873

Epoch: 54 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s922ms | Loss: 13.945 | LR: 1e-05
Evaluation
Img2Txt: 0.471765 Txt2Img: 0.47202 Avg: 0.471892

Epoch: 55 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s921ms | Loss: 13.945 | LR: 1e-05
Evaluation
Img2Txt: 0.471874 Txt2Img: 0.472009 Avg: 0.471942

Epoch: 56 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s918ms | Loss: 13.945 | LR: 1e-05
Evaluation
Img2Txt: 0.471936 Txt2Img: 0.47193 Avg: 0.471933

Epoch: 57 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s931ms | Loss: 13.945 | LR: 1e-05
Evaluation
Img2Txt: 0.472061 Txt2Img: 0.472083 Avg: 0.472072
Saving..

Epoch: 58 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s918ms | Loss: 13.945 | LR: 1e-05
Evaluation
Img2Txt: 0.471972 Txt2Img: 0.47204 Avg: 0.472006

Epoch: 59 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s920ms | Loss: 13.945 | LR: 1e-05
Evaluation
Img2Txt: 0.471962 Txt2Img: 0.472009 Avg: 0.471986

Epoch: 60 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s924ms | Loss: 13.945 | LR: 1e-05
Evaluation
Img2Txt: 0.472024 Txt2Img: 0.472021 Avg: 0.472023

Epoch: 61 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s918ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472109 Txt2Img: 0.472094 Avg: 0.472102
Saving..

Epoch: 62 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s927ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472083 Txt2Img: 0.472026 Avg: 0.472055

Epoch: 63 / 100
[================= 70/70 ====================>] Step: 40ms | Tot: 2s919ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472093 Txt2Img: 0.472024 Avg: 0.472058

Epoch: 64 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s924ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472098 Txt2Img: 0.472027 Avg: 0.472063

Epoch: 65 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s923ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472026 Txt2Img: 0.472036 Avg: 0.472031

Epoch: 66 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s914ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472026 Txt2Img: 0.472036 Avg: 0.472031

Epoch: 67 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s923ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.47203 Txt2Img: 0.472037 Avg: 0.472034

Epoch: 68 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s922ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472082 Txt2Img: 0.472032 Avg: 0.472057

Epoch: 69 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s926ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472035 Txt2Img: 0.472033 Avg: 0.472034

Epoch: 70 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s922ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472031 Txt2Img: 0.47203 Avg: 0.472031

Epoch: 71 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s916ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472021 Txt2Img: 0.472034 Avg: 0.472028

Epoch: 72 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s920ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472082 Txt2Img: 0.472032 Avg: 0.472057

Epoch: 73 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s921ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.47202 Txt2Img: 0.472036 Avg: 0.472028

Epoch: 74 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s927ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.471985 Txt2Img: 0.472042 Avg: 0.472014

Epoch: 75 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s920ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472083 Txt2Img: 0.472036 Avg: 0.472059

Epoch: 76 / 100
[================= 70/70 ====================>] Step: 40ms | Tot: 2s921ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472059 Txt2Img: 0.472036 Avg: 0.472047

Epoch: 77 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s925ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472077 Txt2Img: 0.472035 Avg: 0.472056

Epoch: 78 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s924ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.471933 Txt2Img: 0.472049 Avg: 0.471991

Epoch: 79 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s923ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.471957 Txt2Img: 0.472032 Avg: 0.471995

Epoch: 80 / 100
[================= 70/70 ====================>] Step: 42ms | Tot: 2s928ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.471867 Txt2Img: 0.472035 Avg: 0.471951

Epoch: 81 / 100
[================= 70/70 ====================>] Step: 40ms | Tot: 2s912ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.471913 Txt2Img: 0.472049 Avg: 0.471981

Epoch: 82 / 100
[================= 70/70 ====================>] Step: 40ms | Tot: 2s924ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472036 Txt2Img: 0.472042 Avg: 0.472039

Epoch: 83 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s919ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.471955 Txt2Img: 0.471997 Avg: 0.471976

Epoch: 84 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s917ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472018 Txt2Img: 0.472045 Avg: 0.472032

Epoch: 85 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s927ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.471952 Txt2Img: 0.472043 Avg: 0.471997

Epoch: 86 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s925ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472036 Txt2Img: 0.472035 Avg: 0.472036

Epoch: 87 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s921ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.47198 Txt2Img: 0.472041 Avg: 0.472011

Epoch: 88 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s927ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.472054 Txt2Img: 0.472003 Avg: 0.472028

Epoch: 89 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s926ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.471952 Txt2Img: 0.472043 Avg: 0.471997

Epoch: 90 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s918ms | Loss: 13.945 | LR: 1e-06
Evaluation
Img2Txt: 0.471952 Txt2Img: 0.472043 Avg: 0.471997

Epoch: 91 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s921ms | Loss: 13.945 | LR: 1e-07
Evaluation
Img2Txt: 0.471952 Txt2Img: 0.472043 Avg: 0.471997

Epoch: 92 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s925ms | Loss: 13.945 | LR: 1e-07
Evaluation
Img2Txt: 0.471952 Txt2Img: 0.472043 Avg: 0.471997

Epoch: 93 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s932ms | Loss: 13.945 | LR: 1e-07
Evaluation
Img2Txt: 0.471952 Txt2Img: 0.472043 Avg: 0.471997

Epoch: 94 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s930ms | Loss: 13.945 | LR: 1e-07
Evaluation
Img2Txt: 0.471952 Txt2Img: 0.472043 Avg: 0.471997

Epoch: 95 / 100
[================= 70/70 ====================>] Step: 42ms | Tot: 2s920ms | Loss: 13.945 | LR: 1e-07
Evaluation
Img2Txt: 0.471952 Txt2Img: 0.472043 Avg: 0.471997

Epoch: 96 / 100
[================= 70/70 ====================>] Step: 40ms | Tot: 2s917ms | Loss: 13.945 | LR: 1e-07
Evaluation
Img2Txt: 0.471952 Txt2Img: 0.472043 Avg: 0.471997

Epoch: 97 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s926ms | Loss: 13.945 | LR: 1e-07
Evaluation
Img2Txt: 0.471952 Txt2Img: 0.472043 Avg: 0.471997

Epoch: 98 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s915ms | Loss: 13.945 | LR: 1e-07
Evaluation
Img2Txt: 0.471952 Txt2Img: 0.472043 Avg: 0.471997

Epoch: 99 / 100
[================= 70/70 ====================>] Step: 41ms | Tot: 2s928ms | Loss: 13.945 | LR: 1e-07
Evaluation
Img2Txt: 0.471952 Txt2Img: 0.472043 Avg: 0.471997
Test
Img2Txt: 0.471749 Txt2Img: 0.468014 Avg: 0.469881

Some questions about the techniques in paper

Thanks for your great work!

I have some questions about the techniques presented in the paper. In the contrastive function in Eq. 4, you aim to align the real-valued vectors $h_i^x$ and $h_i^y$ with the binary hash code $k_i$. In this training scheme, the final representation $h_i^{*}$ will have a high similarity score with $k_i$. But the final goal of hashing is to make $sign(h_i^{*})$ similar to $k_i$. I think there is a gap between these two kinds of objectives: the contrastive loss does not force the samples to approximate the hash codes, but only to have a high similarity score with them.

Looking forward to your reply!
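
One standard way to narrow the gap described above, independently of what UCCH actually does internally, is a straight-through sign estimator: the forward pass uses sign(h) so the loss sees the true binary code, while the backward pass copies the gradient through. A generic sketch:

```python
import torch

class SignSTE(torch.autograd.Function):
    """Straight-through sign: forward binarizes, backward passes the gradient
    through unchanged, so the loss operates on sign(h) while the network still
    receives useful gradients. A generic sketch, not UCCH's code."""
    @staticmethod
    def forward(ctx, h):
        return torch.sign(h)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

binarize = SignSTE.apply  # usage: codes = binarize(h)
```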

I have some questions I'd like to ask.

Great job. Using the command you provided python UCCH.py --data_name mirflickr25k_fea --bit 128 --alpha 0.7 --num_hiden_layers 3 2 --margin 0.2 --max_epochs 20 --train_batch_size 256 --shift 0.1 --lr 0.0001 --optimizer Adam, I was able to achieve nearly similar results (as follows):
Epoch: 19 / 20
[================= 70/70 ====================>] Step: 35ms | Tot: 2s485ms | Loss: 13.143 | LR: 0.0001
Evaluation
Img2Txt: 0.757773 Txt2Img: 0.757578 Avg: 0.757676
Test
Img2Txt: 0.769204 Txt2Img: 0.746224 Avg: 0.757714

However, I noticed that when I start from the Raw data with this command: python UCCH.py --data_name mirflickr25k --bit 128 --alpha 0.7 --num_hiden_layers 3 2 --margin 0.2 --max_epochs 20 --train_batch_size 256 --shift 0.1 --lr 0.0001 --optimizer Adam --pretrain -a vgg11 for training, the result is only about 0.56. The code you provided does not train the backbone, and I have also tried vgg19 and even a pretrained vit, but the results are all about 0.56. This is a significant difference from the results with the vgg extracted features you provided.

I am wondering if you have done any additional processing on the backbone.

Best regards.

Instance-level retrieval (Flickr30K)

Thank you for your very interesting work!

I am currently investigating how category-level cross-modal hashing models compare to instance-level fine-grained retrieval models, to quantify the speed/storage gains CMH offers against the sacrifice in recall/precision. The instance-retrieval results on Flickr30K in Table 4 of the paper show interesting potential; however, I am not able to reproduce them.

My current approach is straightforward: take the existing training and evaluation functionality, follow the same dataset pre-processing, and implement recall@k on the same hash codes used for the existing mAP evaluation. I see that the paper mentions implanting your model onto the VSE++ framework for a fair comparison. I am wondering what this means in more detail, and which steps in that process may explain the promising results in the paper compared to my straightforward attempt.

Any other additional insights on the topic of applying your model to the instance-retrieval task would be greatly appreciated.

Many thanks in advance.
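
For reference, a minimal recall@K over ±1 hash codes might look like the sketch below; the single ground-truth gallery index per query (gt_index) and the function name are our assumptions for Flickr30K-style instance retrieval, not the paper's exact protocol:

```python
import torch

def recall_at_k(query_codes, gallery_codes, gt_index, ks=(1, 5, 10)):
    """Sketch of instance-level recall@K over hash codes in {-1, +1}.
    gt_index: LongTensor where gt_index[i] is query i's ground-truth gallery index."""
    # For +/-1 codes, Hamming distance is an affine function of the inner
    # product, so ranking by similarity equals ranking by Hamming distance.
    sim = query_codes.float() @ gallery_codes.float().t()
    ranks = sim.argsort(dim=1, descending=True)
    out = {}
    for k in ks:
        hit = (ranks[:, :k] == gt_index.unsqueeze(1)).any(dim=1).float()
        out[k] = hit.mean().item()
    return out
```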

Missing mirflickr25k-iall.mat file

Running the command python UCCH.py --data_name mirflickr25k --bit 128 --alpha 0.7 --num_hiden_layers 3 2 --margin 0.2 --max_epochs 20 --train_batch_size 256 --shift 0.1 --lr 0.0001 --optimizer Adam --warmup_epoch 5 --pretrain -a vgg11 fails with: FileNotFoundError: [Errno 2] Unable to synchronously open file (unable to open file: name = './data/MIRFLICKR25K/mirflickr25k-iall.mat', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0).
Could you provide a download link for the mirflickr25k-iall.mat file? Thank you!

PR curve

Hello, it is very difficult to find code for drawing a PR curve online, so I would like to ask whether you could share your PR-curve plotting code. Thank you very much.
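
In the meantime, here is a minimal sketch (ours, not the authors' code) of a Hamming-radius PR curve for ±1 codes with multi-label ground truth; the returned arrays can be plotted with matplotlib:

```python
import numpy as np

def pr_curve(query_codes, retrieval_codes, query_labs, retrieval_labs):
    """Sketch: mean precision/recall at every Hamming radius for +/-1 codes."""
    bit = query_codes.shape[1]
    sim = query_codes @ retrieval_codes.T
    hamm = 0.5 * (bit - sim)                       # Hamming distance from inner product
    rel = (query_labs @ retrieval_labs.T > 0)      # relevant if they share any label
    P, R = [], []
    for r in range(bit + 1):                       # sweep the Hamming radius
        retrieved = hamm <= r
        tp = (retrieved & rel).sum(axis=1).astype(np.float64)
        n_ret = retrieved.sum(axis=1)
        n_rel = rel.sum(axis=1)
        prec = np.where(n_ret > 0, tp / np.maximum(n_ret, 1), 0.0)
        reca = np.where(n_rel > 0, tp / np.maximum(n_rel, 1), 0.0)
        P.append(prec.mean()); R.append(reca.mean())
    return np.array(P), np.array(R)
```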
