
kbc's Introduction

Knowledge Base Completion (kbc)

This code reproduces results in Canonical Tensor Decomposition for Knowledge Base Completion (ICML 2018).

Installation

Create a conda environment with pytorch, cython, and scikit-learn:

conda create --name kbc_env python=3.7
source activate kbc_env
conda install --file requirements.txt -c pytorch

Then install the kbc package into this environment:

python setup.py install
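
As a quick sanity check, the package should then be importable from the environment (this only verifies the install; the package name comes from setup.py):

python -c "import kbc"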

Datasets

To download the datasets, go to the kbc/scripts folder and run:

chmod +x download_data.sh
./download_data.sh

Once the datasets are downloaded, add them to the package data folder by running:

python kbc/process_datasets.py

This will create the files required to compute the filtered metrics.
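
For reference, here is a minimal sketch of loading one of the processed datasets from Python, assuming the kbc package is installed; the Dataset class and its get_shape / get_train methods follow the usage shown in the learn.py variation quoted further down this page:

import torch
from kbc.datasets import Dataset  # module path assumed from the repository layout

dataset = Dataset('FB15K')
print(dataset.get_shape())  # (n_entities, n_predicates, n_entities)
train = torch.from_numpy(dataset.get_train().astype('int64'))  # one (lhs, rel, rhs) triple per row
print(train.shape)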

Running the code

Reproduce the results below with the following command:

python kbc/learn.py --dataset FB15K --model ComplEx --rank 500 --optimizer Adagrad --learning_rate 1e-1 --batch_size 1000 --regularizer N3 --reg 1e-2 --max_epochs 100 --valid 5

Results

In addition to the results in the paper, here is the performance of ComplEx regularized with the weighted N3 on several datasets, for several dimensions. Unless specified otherwise, we use the Adagrad optimizer with an init scale of 1e-3, a learning rate of 0.1, a batch size of 1000, and 100 max epochs.

FB15k

For rank 2000: learning rate 1e-2, batch size 100, max epochs 200.

rank 5 25 50 100 500 2000
MRR 0.36 0.61 0.78 0.83 0.84 0.86
H@1 0.27 0.52 0.73 0.79 0.80 0.83
H@3 0.41 0.67 0.81 0.85 0.87 0.87
H@10 0.55 0.77 0.86 0.89 0.91 0.91
reg 1e-5 1e-5 1e-5 7.5e-4 1e-2 2.5e-3
#Params 163k 815k 1.630M 3.259M 1.630M 65.184M

WN18

Max epochs: 20

rank 5 8 16 25 50 100 500 2000
MRR 0.19 0.45 0.92 0.94 0.95 0.95 0.95 0.95
H@1 0.14 0.37 0.91 0.94 0.94 0.94 0.94 0.94
H@3 0.20 0.50 0.93 0.94 0.95 0.95 0.95 0.95
H@10 0.29 0.60 0.94 0.95 0.95 0.95 0.96 0.96
reg 1e-3 5e-4 5e-4 1e-3 5e-3 5e-2 5e-2 5e-2
#Params 410k 656k 1.311M 2.049M 4.098M 8.196M 40.979M 163.916M

FB15K-237

Batch size: 100 (1000 for rank 1000)

rank 5 25 50 100 500 1000 2000
MRR 0.28 0.33 0.34 0.35 0.36 0.37 0.37
H@1 0.20 0.24 0.25 0.26 0.27 0.27 0.27
H@3 0.31 0.36 0.37 0.39 0.40 0.40 0.40
H@10 0.44 0.51 0.52 0.54 0.56 0.56 0.56
reg 5e-4 5e-2 5e-2 5e-2 5e-2 5e-2 5e-2
#Params 150k 751k 1.502M 3.003M 15.015M 30.030M 60.060M

WN18RR

Batch size: 100 (1000 for rank 8)

rank 5 8 16 25 50 100 500 2000
MRR 0.26 0.36 0.42 0.44 0.46 0.47 0.49 0.49
H@1 0.20 0.38 0.39 0.41 0.43 0.43 0.44 0.44
H@3 0.29 0.38 0.42 0.45 0.47 0.49 0.50 0.50
H@10 0.36 0.41 0.46 0.49 0.52 0.56 0.58 0.58
reg 5e-4 5e-4 5e-2 1e-1 1e-1 1e-1 1e-1 1e-1
#Params 410k 655k 1.311M 2.048M 4.097M 8.193M 40.975M 163.860M

YAGO3-10

rank 5 16 25 50 100 500 1000
MRR 0.15 0.34 0.46 0.54 0.56 0.57 0.58
H@1 0.10 0.26 0.38 0.47 0.49 0.50 0.50
H@3 0.16 0.37 0.50 0.58 0.60 0.62 0.62
H@10 0.25 0.50 0.60 0.67 0.69 0.71 0.71
reg 1e-3 1e-4 5e-3 5e-3 5e-3 5e-3 5e-3
#Params 1.233M 3.944M 6.163M 12.326M 24.652M 123.262M 246.524M

License

kbc is CC-BY-NC licensed, as found in the LICENSE file.

kbc's People

Contributors

timlacroix

kbc's Issues

Asking about reproducing ComplEx on the ORIGINAL FB15K

Thank you for taking the time to fine-tune the new parameters.

I have a problem with the FB15K dataset; could you please correct me if I am wrong somewhere?

Original FB15K
I cannot reproduce the same result on the original FB15K dataset or on the dataset that you provide.
This link: [https://everest.hds.utc.fr/lib/exe/fetch.php?media=en:fb15k.tgz]
The valid-set MRR reaches 0.84728584 at epoch 115, and then it starts to fall.

FB15K
I also tested on the dataset that you provide at this link [https://dl.fbaipublicfiles.com/kbc/data.tar.gz]
(all entities and relations were encoded as integer IDs).
With the new parameters, the model can only reach an MRR of 0.84741566 at epoch 116 on the valid set.

What should I do to reproduce the MRR score of 0.86 on the FB15K dataset?

Here is the command that I used to test.
FB15K - ComplEx - Rank 2000 - reg 2.5e-3
python kbc/learn.py --dataset FB15K --model ComplEx --rank 2000 --optimizer Adagrad --learning_rate 1e-2 --batch_size 100 --regularizer N3 --reg 2.5e-3 --max_epochs 200 --valid 1

This is the log file on the Original FB15K dataset:
Epoch Train(loss) Valid(MRR) Test(MRR)
0 0.28557111 0.21563877 0.21695131
1 0.51367164 0.34545964 0.34569839
2 0.71805871 0.48132567 0.47928540
3 0.82222313 0.56841329 0.56578165
4 0.88000029 0.62548071 0.62345359
5 0.91472250 0.66596788 0.66418925
6 0.93698156 0.69591397 0.69439766
7 0.95358196 0.71943289 0.71805435
8 0.96355888 0.73751086 0.73588541
9 0.97203004 0.75185460 0.75069633
10 0.97879598 0.76432011 0.76255512
11 0.98380214 0.77389356 0.77257067
12 0.98672199 0.78240332 0.78084379
13 0.98983303 0.78926352 0.78804612
14 0.99128643 0.79544601 0.79427382
15 0.99359611 0.80076903 0.79957074
16 0.99423641 0.80527973 0.80403480
17 0.99501967 0.80940974 0.80801624
18 0.99618039 0.81269062 0.81145489
19 0.99689016 0.81547362 0.81428304
20 0.99707615 0.81818026 0.81722468
21 0.99751508 0.82045570 0.81961802
22 0.99799338 0.82271165 0.82185829
23 0.99809900 0.82455397 0.82380486
24 0.99842545 0.82617694 0.82553080
25 0.99881667 0.82775342 0.82701689
26 0.99896574 0.82907116 0.82846227
27 0.99898544 0.83034629 0.82978585
28 0.99923363 0.83151683 0.83103910
29 0.99919987 0.83264852 0.83193302
30 0.99923709 0.83367193 0.83296466
31 0.99939093 0.83448964 0.83397171
32 0.99949011 0.83532438 0.83483064
33 0.99951237 0.83608401 0.83568957
34 0.99949709 0.83680180 0.83645040
35 0.99965772 0.83754161 0.83714667
36 0.99962586 0.83809781 0.83781001
37 0.99970272 0.83862516 0.83830112
38 0.99973676 0.83922428 0.83888388
39 0.99973369 0.83966893 0.83941740
40 0.99973398 0.84024203 0.83985305
41 0.99984252 0.84062093 0.84031960
42 0.99974993 0.84110683 0.84075865
43 0.99980173 0.84145269 0.84105837
44 0.99983624 0.84181148 0.84132946
45 0.99983835 0.84220415 0.84167388
46 0.99983218 0.84245965 0.84199527
47 0.99986276 0.84268475 0.84226802
48 0.99981329 0.84295464 0.84251139
49 0.99989858 0.84327799 0.84273136
50 0.99989870 0.84347713 0.84295285
51 0.99994585 0.84364322 0.84318906
52 0.99992108 0.84382129 0.84346214
53 0.99990213 0.84408641 0.84364283
54 0.99988773 0.84425330 0.84381098
55 0.99993330 0.84451291 0.84402552
56 0.99993637 0.84467125 0.84414390
57 0.99996072 0.84483013 0.84425008
58 0.99990815 0.84486419 0.84439784
59 0.99994701 0.84501472 0.84453636
60 0.99995145 0.84514958 0.84472007
61 0.99997666 0.84520793 0.84484276
62 0.99994531 0.84535003 0.84493870
63 0.99991095 0.84543225 0.84513894
64 0.99995717 0.84551516 0.84519318
65 0.99992430 0.84561467 0.84530640
66 0.99996042 0.84570009 0.84536874
67 0.99996126 0.84582907 0.84548032
68 0.99996677 0.84588811 0.84564543
69 0.99995053 0.84600341 0.84571698
70 0.99996054 0.84607205 0.84568727
71 0.99996501 0.84611568 0.84581268
72 0.99997050 0.84620091 0.84589335
73 0.99995700 0.84630886 0.84599790
74 0.99997500 0.84642345 0.84612584
75 0.99991444 0.84646434 0.84616750
76 0.99997142 0.84654942 0.84629130
77 0.99998251 0.84653986 0.84627274
78 0.99994311 0.84658444 0.84634748
79 0.99996251 0.84666705 0.84637389
80 0.99998498 0.84672299 0.84642160
81 0.99995297 0.84675327 0.84646890
82 1.00000000 0.84674850 0.84651664
83 0.99997333 0.84680906 0.84656042
84 0.99994835 0.84689119 0.84655464
85 0.99996999 0.84693623 0.84662983
86 0.99998507 0.84692633 0.84664306
87 0.99998498 0.84696662 0.84668371
88 0.99996778 0.84699124 0.84675461
89 0.99996498 0.84703335 0.84671801
90 0.99994567 0.84704572 0.84679103
91 0.99997625 0.84705746 0.84683430
92 0.99999079 0.84705797 0.84683645
93 0.99996543 0.84706438 0.84691560
94 0.99998000 0.84705409 0.84686515
95 0.99995849 0.84709358 0.84686705
96 0.99997500 0.84713063 0.84687436
97 0.99998042 0.84716102 0.84692174
98 0.99993712 0.84716469 0.84691164
99 0.99997512 0.84724975 0.84698278
100 0.99998999 0.84722605 0.84696332
101 0.99998501 0.84714711 0.84697026
102 0.99999091 0.84714139 0.84696552
103 1.00000000 0.84712300 0.84702221
104 0.99998251 0.84711015 0.84705314
105 1.00000000 0.84714445 0.84705025
106 0.99995998 0.84717971 0.84705117
107 0.99998999 0.84714749 0.84712243
108 0.99996001 0.84717065 0.84712589
109 0.99996001 0.84720722 0.84714031
110 0.99999499 0.84714127 0.84714335
111 0.99998999 0.84718046 0.84716892
112 0.99996015 0.84724343 0.84717712
113 0.99997124 0.84727567 0.84722853
114 0.99998835 0.84727034 0.84719172
115 1.00000000 0.84728584 0.84723872
116 0.99994627 0.84724098 0.84721979
117 0.99997625 0.84724268 0.84721857
118 0.99996498 0.84724936 0.84721971
119 0.99998999 0.84718457 0.84722593
120 0.99997997 0.84718266 0.84721348
121 0.99997723 0.84711304 0.84713510
122 1.00000000 0.84709558 0.84712282
123 0.99998197 0.84714195 0.84721988
124 0.99996257 0.84714207 0.84715748
125 0.99996999 0.84711239 0.84706751
126 0.99994999 0.84713465 0.84712446
127 0.99999031 0.84710392 0.84714404
128 0.99998999 0.84715080 0.84706330
129 1.00000000 0.84712383 0.84707800
130 1.00000000 0.84713739 0.84704885
131 0.99998498 0.84711719 0.84699729
132 0.99997750 0.84713098 0.84701556
133 0.99999499 0.84707537 0.84698561
134 1.00000000 0.84704778 0.84698290
135 0.99998751 0.84702012 0.84699649
136 0.99999031 0.84702533 0.84692785
137 0.99998248 0.84701875 0.84700808
138 0.99999499 0.84704241 0.84696236
139 0.99999055 0.84700993 0.84698820
140 0.99998063 0.84700748 0.84691477
141 0.99997997 0.84703130 0.84689963
142 1.00000000 0.84700713 0.84691343
143 0.99995500 0.84700537 0.84693551
144 0.99998498 0.84695253 0.84687895
145 0.99998501 0.84691501 0.84682265
146 0.99996090 0.84697160 0.84673926
147 0.99998501 0.84695709 0.84674859
148 0.99995625 0.84690955 0.84676823
149 0.99996501 0.84686524 0.84675056
150 0.99998999 0.84684649 0.84672481
151 0.99997538 0.84680137 0.84672895
152 0.99998003 0.84676564 0.84669957
153 0.99995500 0.84676844 0.84673354
154 1.00000000 0.84674770 0.84668726
155 0.99999249 0.84671041 0.84664011
156 0.99998042 0.84669632 0.84662616
157 0.99996001 0.84671518 0.84662899
158 0.99998501 0.84667584 0.84661323
159 1.00000000 0.84656939 0.84654659
160 0.99997646 0.84660426 0.84652206
161 0.99996710 0.84654680 0.84648311
162 0.99998501 0.84651509 0.84647587
163 0.99994299 0.84647438 0.84639782
164 0.99998748 0.84648508 0.84638712
165 0.99996501 0.84642461 0.84640118
166 0.99997997 0.84636709 0.84640270
167 0.99998501 0.84637338 0.84636074
168 0.99997750 0.84631938 0.84632099
169 0.99997500 0.84631246 0.84629214
170 0.99996504 0.84632692 0.84625468
171 0.99998498 0.84628534 0.84626123
172 0.99998498 0.84625691 0.84622723
173 0.99997255 0.84620216 0.84613714
174 0.99997997 0.84617501 0.84614974
175 0.99998999 0.84615323 0.84610152
176 0.99998999 0.84614855 0.84604591
177 0.99999499 0.84608182 0.84602916
178 1.00000000 0.84608290 0.84602734
179 0.99998999 0.84606841 0.84600395
180 0.99999499 0.84604746 0.84599540
181 1.00000000 0.84599975 0.84594476
182 0.99998999 0.84598336 0.84593174
183 0.99996063 0.84594807 0.84588915
184 1.00000000 0.84592941 0.84582505
185 0.99999008 0.84591907 0.84583986
186 0.99998501 0.84593514 0.84581825
187 0.99996999 0.84590071 0.84578407
188 0.99996501 0.84587952 0.84576884
189 0.99998501 0.84582567 0.84573334
190 0.99997067 0.84579670 0.84567308
191 0.99998999 0.84578952 0.84564203
192 0.99997506 0.84573105 0.84559438
193 0.99998999 0.84570247 0.84556815
194 0.99996999 0.84569108 0.84550563
195 0.99998069 0.84567350 0.84553504
196 0.99999070 0.84568036 0.84546959
197 0.99999499 0.84567553 0.84548214
198 1.00000000 0.84568834 0.84544203
199 0.99999499 0.84565824 0.84539443

This is the log file on your FB15K dataset:
Epoch Valid Test
0 0.21258732 0.21350736
1 0.34424557 0.34381460
2 0.47937773 0.47843648
3 0.56808072 0.56612557
4 0.62420285 0.62284073
5 0.66522732 0.66430059
6 0.69515529 0.69461825
7 0.71935973 0.71856740
8 0.73759723 0.73675600
9 0.75176904 0.75127017
10 0.76360717 0.76314065
11 0.77367082 0.77285811
12 0.78181231 0.78117460
13 0.78893596 0.78825158
14 0.79464367 0.79394436
15 0.79965779 0.79891652
16 0.80416784 0.80320734
17 0.80781418 0.80694491
18 0.81108990 0.81046605
19 0.81407723 0.81354842
20 0.81667709 0.81635964
21 0.81903249 0.81885853
22 0.82109931 0.82091731
23 0.82298642 0.82301471
24 0.82486853 0.82481173
25 0.82649991 0.82624820
26 0.82804829 0.82775813
27 0.82939535 0.82912040
28 0.83069190 0.83039460
29 0.83171308 0.83139932
30 0.83272809 0.83235934
31 0.83349121 0.83344638
32 0.83446184 0.83431569
33 0.83510509 0.83518907
34 0.83589664 0.83589852
35 0.83649659 0.83659431
36 0.83705834 0.83720976
37 0.83776906 0.83783305
38 0.83834016 0.83840394
39 0.83883491 0.83896181
40 0.83931002 0.83936596
41 0.83980522 0.83995181
42 0.84013960 0.84032023
43 0.84050202 0.84060919
44 0.84089246 0.84105113
45 0.84123555 0.84131080
46 0.84165716 0.84168682
47 0.84192261 0.84207213
48 0.84218460 0.84237254
49 0.84245339 0.84263423
50 0.84265190 0.84286454
51 0.84282124 0.84310189
52 0.84299335 0.84333399
53 0.84332195 0.84362227
54 0.84355536 0.84387347
55 0.84373599 0.84405529
56 0.84394130 0.84424263
57 0.84410715 0.84446669
58 0.84429693 0.84459850
59 0.84439135 0.84481570
60 0.84451100 0.84492943
61 0.84467703 0.84497029
62 0.84486023 0.84516272
63 0.84496894 0.84528482
64 0.84508294 0.84539589
65 0.84519145 0.84554064
66 0.84528682 0.84567285
67 0.84537861 0.84575692
68 0.84548092 0.84585649
69 0.84558696 0.84593722
70 0.84573057 0.84610561
71 0.84574851 0.84618628
72 0.84583104 0.84625101
73 0.84597796 0.84630573
74 0.84605747 0.84633306
75 0.84610146 0.84643668
76 0.84621009 0.84655190
77 0.84623879 0.84662759
78 0.84633097 0.84667701
79 0.84634042 0.84666130
80 0.84643418 0.84668970
81 0.84648758 0.84680939
82 0.84650791 0.84684694
83 0.84653121 0.84692636
84 0.84656152 0.84699786
85 0.84661219 0.84709558
86 0.84661749 0.84709668
87 0.84666082 0.84712788
88 0.84670296 0.84719133
89 0.84675634 0.84724995
90 0.84676403 0.84724417
91 0.84676820 0.84733453
92 0.84680352 0.84744552
93 0.84685972 0.84746245
94 0.84689054 0.84749284
95 0.84692457 0.84748578
96 0.84698904 0.84754845
97 0.84690079 0.84756041
98 0.84692711 0.84754947
99 0.84699202 0.84761971
100 0.84704185 0.84763774
101 0.84705469 0.84763399
102 0.84704277 0.84765586
103 0.84707248 0.84767416
104 0.84702805 0.84767899
105 0.84706312 0.84768814
106 0.84715536 0.84767562
107 0.84721795 0.84768403
108 0.84720945 0.84768522
109 0.84722167 0.84773076
110 0.84721404 0.84772056
111 0.84721702 0.84773859
112 0.84728742 0.84770960
113 0.84728926 0.84774402
114 0.84734976 0.84778410
115 0.84731501 0.84779674
116 0.84741566 0.84782439
117 0.84739107 0.84779879
118 0.84729117 0.84781134
119 0.84731171 0.84782970
120 0.84732980 0.84782100
121 0.84726146 0.84783986
122 0.84730437 0.84783909
123 0.84727165 0.84785762
124 0.84723085 0.84784967
125 0.84726924 0.84792364
126 0.84727404 0.84792477
127 0.84726635 0.84792799
128 0.84725103 0.84787238
129 0.84722188 0.84786892
130 0.84721172 0.84784093
131 0.84723634 0.84784409
132 0.84727401 0.84781635
133 0.84727201 0.84776321
134 0.84720200 0.84770805
135 0.84721854 0.84775108
136 0.84722400 0.84771210
137 0.84721756 0.84770700
138 0.84720501 0.84769720
139 0.84722731 0.84772941
140 0.84721914 0.84774718
141 0.84717855 0.84770057
142 0.84712976 0.84770173
143 0.84711733 0.84767681
144 0.84712321 0.84769174
145 0.84708938 0.84766129
146 0.84707320 0.84764475
147 0.84706855 0.84764397
148 0.84703904 0.84761724
149 0.84703434 0.84758499
150 0.84705076 0.84759974
151 0.84702227 0.84754214
152 0.84702975 0.84758264
153 0.84700632 0.84753498
154 0.84693855 0.84754840
155 0.84695029 0.84753019
156 0.84691596 0.84749040
157 0.84689823 0.84740123
158 0.84690204 0.84748235
159 0.84688139 0.84741437
160 0.84681758 0.84740335
161 0.84687313 0.84739602
162 0.84682325 0.84737438
163 0.84683150 0.84737250
164 0.84683901 0.84733865
165 0.84681138 0.84732494
166 0.84679842 0.84732980
167 0.84672916 0.84732679
168 0.84671387 0.84731510
169 0.84667236 0.84730572
170 0.84667927 0.84729102
171 0.84665594 0.84726185
172 0.84665683 0.84726265
173 0.84661898 0.84724274
174 0.84657836 0.84719962
175 0.84661990 0.84719303
176 0.84660980 0.84718305
177 0.84656027 0.84714520
178 0.84653687 0.84712970
179 0.84650657 0.84710813
180 0.84646201 0.84711751
181 0.84648508 0.84707522
182 0.84644800 0.84703219
183 0.84638324 0.84700710
184 0.84635654 0.84697950
185 0.84630013 0.84696463
186 0.84635559 0.84692952
187 0.84632835 0.84690070
188 0.84626627 0.84688720
189 0.84621182 0.84684193
190 0.84622246 0.84685332
191 0.84620795 0.84683934
192 0.84617031 0.84682143
193 0.84614643 0.84678918
194 0.84612688 0.84676951
195 0.84608319 0.84678707
196 0.84606808 0.84675702
197 0.84603623 0.84674743
198 0.84601948 0.84670722
199 0.84600386 0.84666497

Conda channels

Howdy! Thanks for the great work.

I couldn't set up the conda environment until I added the intel channel to the conda install. That is,

conda install --file requirements.txt -c pytorch -c intel

Instead of

conda install --file requirements.txt -c pytorch

Saving the trained model for use in inference

Hi, thank you for developing these amazing models.

May I suggest making learn.py save the model to the filesystem during training?
This way one could also use a trained model for inference; e.g. for my research I am trying to extract deeper information from the model predictions, so I need to perform inference after training is done.

I have implemented the variation myself, and so far it seems to work. It is a tiny variation of course, but it may be useful to other developers so I'm sharing this here.

If you think it's a nice feature to have you can integrate the code to your repo, or otherwise feel free to just close this issue :)

# Imports were omitted from the original snippet; the module paths below are
# assumed from the kbc package layout (kbc/datasets.py, kbc/models.py,
# kbc/regularizers.py, kbc/optimizers.py) used by the repository's learn.py.
import argparse
import os
from typing import Dict

import torch
from torch import optim

from kbc.datasets import Dataset
from kbc.models import CP, ComplEx
from kbc.optimizers import KBCOptimizer
from kbc.regularizers import N2, N3  # use F2 instead of N2 if your version exposes that name

big_datasets = ['FB15K', 'WN', 'WN18RR', 'FB237', 'YAGO3-10']
datasets = big_datasets

parser = argparse.ArgumentParser(
    description="Relational learning contraption"
)

parser.add_argument(
    '--dataset', choices=datasets,
    help="Dataset in {}".format(datasets)
)

models = ['CP', 'ComplEx']
parser.add_argument(
    '--model', choices=models,
    help="Model in {}".format(models)
)

regularizers = ['N3', 'N2']
parser.add_argument(
    '--regularizer', choices=regularizers, default='N3',
    help="Regularizer in {}".format(regularizers)
)

optimizers = ['Adagrad', 'Adam', 'SGD']
parser.add_argument(
    '--optimizer', choices=optimizers, default='Adagrad',
    help="Optimizer in {}".format(optimizers)
)

parser.add_argument(
    '--max_epochs', default=50, type=int,
    help="Number of epochs."
)
parser.add_argument(
    '--valid', default=3, type=float,
    help="Number of epochs before valid."
)
parser.add_argument(
    '--rank', default=1000, type=int,
    help="Factorization rank."
)
parser.add_argument(
    '--batch_size', default=1000, type=int,
    help="Factorization rank."
)
parser.add_argument(
    '--reg', default=0, type=float,
    help="Regularization weight"
)
parser.add_argument(
    '--init', default=1e-3, type=float,
    help="Initial scale"
)
parser.add_argument(
    '--learning_rate', default=1e-1, type=float,
    help="Learning rate"
)
parser.add_argument(
    '--decay1', default=0.9, type=float,
    help="decay rate for the first moment estimate in Adam"
)
parser.add_argument(
    '--decay2', default=0.999, type=float,
    help="decay rate for second moment estimate in Adam"
)

parser.add_argument('--load', help="path to the model to load")

args = parser.parse_args()

dataset = Dataset(args.dataset)
examples = torch.from_numpy(dataset.get_train().astype('int64'))

model_path = "./models/" + "_".join([args.model, args.dataset]) + ".pt"
if args.load is not None:
    model_path = args.load

print(dataset.get_shape())
model = {
    'CP': lambda: CP(dataset.get_shape(), args.rank, args.init),
    'ComplEx': lambda: ComplEx(dataset.get_shape(), args.rank, args.init),
}[args.model]()

regularizer = {
    'N2': N2(args.reg),
    'N3': N3(args.reg),
}[args.regularizer]

device = 'cuda'
model.to(device)

optim_method = {
    'Adagrad': lambda: optim.Adagrad(model.parameters(), lr=args.learning_rate),
    'Adam': lambda: optim.Adam(model.parameters(), lr=args.learning_rate, betas=(args.decay1, args.decay2)),
    'SGD': lambda: optim.SGD(model.parameters(), lr=args.learning_rate)
}[args.optimizer]()

optimizer = KBCOptimizer(model, regularizer, optim_method, args.batch_size)

if args.load is not None:
    model.load_state_dict(torch.load(model_path))
    model.eval()

def avg_both(mrrs: Dict[str, float], hits: Dict[str, torch.FloatTensor]):
    """
    aggregate metrics for missing lhs and rhs
    :param mrrs: d
    :param hits:
    :return:
    """
    m = (mrrs['lhs'] + mrrs['rhs']) / 2.
    h = (hits['lhs'] + hits['rhs']) / 2.
    return {'MRR': m, 'hits@[1,3,10]': h}


cur_loss = 0
curve = {'train': [], 'valid': [], 'test': []}


for e in range(args.max_epochs):
    cur_loss = optimizer.epoch(examples)

    if (e + 1) % args.valid == 0:
        valid, test, train = [
            avg_both(*dataset.eval(model, split, -1 if split != 'train' else 50000))
            for split in ['valid', 'test', 'train']
        ]

        curve['valid'].append(valid)
        curve['test'].append(test)
        curve['train'].append(train)

        print("\t TRAIN: ", train)
        print("\t VALID : ", valid)

        print("\t saving model...")
        torch.save(model.state_dict(), model_path)
        print("\t done.")

results = dataset.eval(model, 'test', -1)
print("\n\nTEST : ", results)

Thanks again for your great work, and have a nice day!

How can I use my own datasets?

I put my datasets in /data and then ran python datasets/process_datasets.py,
but I still need to change some code in kbc/learning/learn.py, especially line 102:

dataset = {
    'FB15K': lambda: big.FB15KDataset(iv, args.prop_kept),
    'FB237': lambda: big.FB237Dataset(iv, args.prop_kept),
    'WN': lambda: big.WNDataset(iv),
    'WN18RR': lambda: big.WN18RRDataset(iv),
    'SVO': lambda: big.SVODataset(iv),
    'YAGO': lambda: big.YAGO310Dataset(iv),
    'UMLS': lambda: small.UMLSDataset(iv),
    'MY_DATASET': lambda: ????????(iv),
}

Could you tell me what should go in place of ???????
Thanks a lot.

How to run your code?

I'm sorry to bother you; I want to know how to get the results after creating the files required to compute the filtered metrics. Could you help me?

Can't match performance reported in paper with pytorch code

As advised, I doubled the reg penalty, but I am still not able to match the performance reported in the paper with the new code.

ComplEx on FB15k (rank 2000):

python kbc/learn.py --dataset FB15K --model ComplEx --rank 2000 --optimizer Adagrad --learning_rate 1e-2 --batch_size 20000 --regularizer N3 --reg 1e-2 --max_epochs 100 --valid 1

TEST : ({'rhs': 0.8638818860054016, 'lhs': 0.8191012144088745}, {'rhs': tensor([0.8245, 0.8918, 0.9291]), 'lhs': tensor([0.7801, 0.8433, 0.8840])})

ComplEx on YAGO3-10 (rank 1000):

python kbc/learn.py --dataset YAGO3-10 --model ComplEx --rank 1000 --optimizer Adagrad --learning_rate 1e-1 --batch_size 1500 --regularizer N3 --reg 1e-1 --max_epochs 100 --valid 1 

TEST : ({'rhs': 0.46305328607559204, 'lhs': 0.1873369812965393}, {'rhs': tensor([0.3494, 0.5212, 0.6812]), 'lhs': tensor([0.1132, 0.2090, 0.3326])})

Please advise on what may be going wrong.

hyper-parameters of CP

Hi, thanks for your elegant code.
Could you please provide the hyper-parameters for FB15k and FB237 with CP?

kbc package installation fails

Initially I was able to create the environment successfully. But when I tried installing the kbc package using python setup.py install, I encountered the following error.

python setup.py install
Please put "# distutils: language=c++" in your .pyx or .pxd file(s)
running install
running bdist_egg
running egg_info
writing kbc.egg-info/PKG-INFO
writing dependency_links to kbc.egg-info/dependency_links.txt
writing top-level names to kbc.egg-info/top_level.txt
reading manifest file 'kbc.egg-info/SOURCES.txt'
writing manifest file 'kbc.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
building 'kbc.lib.bindings' extension
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Wformat -Wformat-security -D_FORTIFY_SOURCE=2 -fstack-protector -O3 -fpic -fPIC -Wformat -Wformat-security -D_FORTIFY_SOURCE=2 -fstack-protector -O3 -fpic -fPIC -fPIC -Ikbc/lib -I/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include -I/home/manuelanayantarajeyaraj/anaconda2/envs/kbc_env1/lib/python3.6/site-packages/numpy/core/include -I/home/manuelanayantarajeyaraj/Desktop/kbc-master/models/ -I/home/manuelanayantarajeyaraj/anaconda2/envs/kbc_env1/include/python3.6m -c kbc/lib/bindings.cpp -o build/temp.linux-x86_64-3.6/kbc/lib/bindings.o -g -std=c++11 -D_GLIBCXX_USE_CXX11_ABI=0
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /home/manuelanayantarajeyaraj/anaconda2/envs/kbc_env1/lib/python3.6/site-packages/numpy/core/include/numpy/ndarraytypes.h:1823:0,
                 from /home/manuelanayantarajeyaraj/anaconda2/envs/kbc_env1/lib/python3.6/site-packages/numpy/core/include/numpy/ndarrayobject.h:18,
                 from /home/manuelanayantarajeyaraj/anaconda2/envs/kbc_env1/lib/python3.6/site-packages/numpy/core/include/numpy/arrayobject.h:4,
                 from kbc/lib/bindings.cpp:626:
/home/manuelanayantarajeyaraj/anaconda2/envs/kbc_env1/lib/python3.6/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
 #warning "Using deprecated NumPy API, disable it by " \
  ^
In file included from kbc/lib/bindings.cpp:630:0:
kbc/lib/models.hpp:73:21: error: cannot convert ‘const c10::DeviceType’ to ‘c10::Backend’ in initialization
   Backend backend = kCPU;
                     ^
kbc/lib/models.hpp: In member function ‘void kbc::Model::toGPU()’:
kbc/lib/models.hpp:48:13: error: cannot convert ‘const c10::DeviceType’ to ‘c10::Backend’ in assignment
     backend = kCUDA;
             ^
kbc/lib/models.hpp: In member function ‘void kbc::Model::toCPU()’:
kbc/lib/models.hpp:53:13: error: cannot convert ‘const c10::DeviceType’ to ‘c10::Backend’ in assignment
     backend = kCPU;
             ^
In file included from kbc/lib/optimizer.hpp:16:0,
                 from kbc/lib/bindings.cpp:631:
kbc/lib/regularizer.hpp: In member function ‘void kbc::Regularizer::toBackend(c10::Backend)’:
kbc/lib/regularizer.hpp:24:74: error: no matching function for call to ‘c10::Scalar::Scalar(at::Tensor)’
     weight = Scalar(CPU(kFloat).scalarTensor(c_weight).toBackend(backend));
                                                                          ^
In file included from /home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/ATen/core/Type.h:8:0,
                 from /home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/ATen/Type.h:2,
                 from /home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/ATen/Context.h:4,
                 from /home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/ATen/ATen.h:5,
                 from kbc/lib/utils.hpp:11,
                 from kbc/lib/bindings.cpp:629:
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:37:3: note: candidate: c10::Scalar::Scalar(std::complex<double>)
   Scalar(type vv) : tag(Tag::HAS_##member) {             \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:44:3: note: in expansion of macro ‘DEFINE_IMPLICIT_COMPLEX_CTOR’
   DEFINE_IMPLICIT_COMPLEX_CTOR(std::complex<double>,ComplexDouble,z)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:37:3: note:   no known conversion for argument 1 from ‘at::Tensor’ to ‘std::complex<double>’
   Scalar(type vv) : tag(Tag::HAS_##member) {             \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:44:3: note: in expansion of macro ‘DEFINE_IMPLICIT_COMPLEX_CTOR’
   DEFINE_IMPLICIT_COMPLEX_CTOR(std::complex<double>,ComplexDouble,z)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:37:3: note: candidate: c10::Scalar::Scalar(std::complex<float>)
   Scalar(type vv) : tag(Tag::HAS_##member) {             \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:43:3: note: in expansion of macro ‘DEFINE_IMPLICIT_COMPLEX_CTOR’
   DEFINE_IMPLICIT_COMPLEX_CTOR(std::complex<float>,ComplexFloat,z)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:37:3: note:   no known conversion for argument 1 from ‘at::Tensor’ to ‘std::complex<float>’
   Scalar(type vv) : tag(Tag::HAS_##member) {             \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:43:3: note: in expansion of macro ‘DEFINE_IMPLICIT_COMPLEX_CTOR’
   DEFINE_IMPLICIT_COMPLEX_CTOR(std::complex<float>,ComplexFloat,z)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:37:3: note: candidate: c10::Scalar::Scalar(c10::ComplexHalf)
   Scalar(type vv) : tag(Tag::HAS_##member) {             \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:42:3: note: in expansion of macro ‘DEFINE_IMPLICIT_COMPLEX_CTOR’
   DEFINE_IMPLICIT_COMPLEX_CTOR(at::ComplexHalf,ComplexHalf,z)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:37:3: note:   no known conversion for argument 1 from ‘at::Tensor’ to ‘c10::ComplexHalf’
   Scalar(type vv) : tag(Tag::HAS_##member) {             \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:42:3: note: in expansion of macro ‘DEFINE_IMPLICIT_COMPLEX_CTOR’
   DEFINE_IMPLICIT_COMPLEX_CTOR(at::ComplexHalf,ComplexHalf,z)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note: candidate: c10::Scalar::Scalar(double)
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:55:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(double,Double,d)
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note:   no known conversion for argument 1 from ‘at::Tensor’ to ‘double’
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:55:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(double,Double,d)
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note: candidate: c10::Scalar::Scalar(float)
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:54:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(float,Float,d)   \
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note:   no known conversion for argument 1 from ‘at::Tensor’ to ‘float’
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:54:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(float,Float,d)   \
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note: candidate: c10::Scalar::Scalar(c10::Half)
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:53:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(at::Half,Half,d) \
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note:   no known conversion for argument 1 from ‘at::Tensor’ to ‘c10::Half’
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:53:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(at::Half,Half,d) \
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note: candidate: c10::Scalar::Scalar(int64_t)
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:52:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(int64_t,Long,i)  \
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note:   no known conversion for argument 1 from ‘at::Tensor’ to ‘int64_t {aka long int}’
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:52:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(int64_t,Long,i)  \
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note: candidate: c10::Scalar::Scalar(int)
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:51:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(int,Int,i)       \
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note:   no known conversion for argument 1 from ‘at::Tensor’ to ‘int’
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:51:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(int,Int,i)       \
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note: candidate: c10::Scalar::Scalar(int16_t)
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:50:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(int16_t,Short,i) \
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note:   no known conversion for argument 1 from ‘at::Tensor’ to ‘int16_t {aka short int}’
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:50:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(int16_t,Short,i) \
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note: candidate: c10::Scalar::Scalar(int8_t)
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:49:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(int8_t,Char,i)   \
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note:   no known conversion for argument 1 from ‘at::Tensor’ to ‘int8_t {aka signed char}’
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:49:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(int8_t,Char,i)   \
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note: candidate: c10::Scalar::Scalar(uint8_t)
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:48:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(uint8_t,Byte,i)  \
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:25:3: note:   no known conversion for argument 1 from ‘at::Tensor’ to ‘uint8_t {aka unsigned char}’
   Scalar(type vv) \
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/ScalarType.h:48:1: note: in expansion of macro ‘DEFINE_IMPLICIT_CTOR’
 _(uint8_t,Byte,i)  \
 ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:32:3: note: in expansion of macro ‘AT_FORALL_SCALAR_TYPES’
   AT_FORALL_SCALAR_TYPES(DEFINE_IMPLICIT_CTOR)
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:22:3: note: candidate: c10::Scalar::Scalar()
   Scalar() : Scalar(int64_t(0)) {}
   ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:22:3: note:   candidate expects 0 arguments, 1 provided
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:20:15: note: candidate: constexpr c10::Scalar::Scalar(const c10::Scalar&)
 class C10_API Scalar {
               ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:20:15: note:   no known conversion for argument 1 from ‘at::Tensor’ to ‘const c10::Scalar&’
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:20:15: note: candidate: constexpr c10::Scalar::Scalar(c10::Scalar&&)
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Scalar.h:20:15: note:   no known conversion for argument 1 from ‘at::Tensor’ to ‘c10::Scalar&&’
In file included from /home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/ATen/core/TensorOptions.h:10:0,
                 from /home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/ATen/core/Type.h:15,
                 from /home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/ATen/Type.h:2,
                 from /home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/ATen/Context.h:4,
                 from /home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/ATen/ATen.h:5,
                 from kbc/lib/utils.hpp:11,
                 from kbc/lib/bindings.cpp:629:
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/util/Optional.h: In instantiation of ‘constexpr c10::constexpr_storage_t<T>::constexpr_storage_t(Args&& ...) [with Args = {c10::Backend&}; T = c10::Device]’:
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/util/Optional.h:294:63:   required from ‘constexpr c10::constexpr_optional_base<T>::constexpr_optional_base(c10::in_place_t, Args&& ...) [with Args = {c10::Backend&}; T = c10::Device]’
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/util/Optional.h:446:71:   required from ‘constexpr c10::optional<T>::optional(c10::in_place_t, Args&& ...) [with Args = {c10::Backend&}; T = c10::Device]’
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/ATen/core/TensorOptions.h:186:18:   required from ‘at::TensorOptions at::TensorOptions::device(Args&& ...) const [with Args = {c10::Backend&}]’
kbc/lib/models.hpp:89:84:   required from here
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/util/Optional.h:241:48: error: no matching function for call to ‘c10::Device::Device(c10::Backend&)’
       : value_(constexpr_forward<Args>(args)...) {}
                                                ^
In file included from /home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/core/Allocator.h:6:0,
                 from /home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/ATen/Allocator.h:2,
                 from /home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/ATen/ATen.h:3,
                 from kbc/lib/utils.hpp:11,
                 from kbc/lib/bindings.cpp:629:
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/Device.h:52:18: note: candidate: c10::Device::Device(const string&)
   /* implicit */ Device(const std::string& device_string);
                  ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/Device.h:52:18: note:   no known conversion for argument 1 from ‘c10::Backend’ to ‘const string& {aka const std::basic_string<char>&}’
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/Device.h:35:18: note: candidate: c10::Device::Device(c10::DeviceType, c10::DeviceIndex)
   /* implicit */ Device(DeviceType type, DeviceIndex index = -1)
                  ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/Device.h:35:18: note:   no known conversion for argument 1 from ‘c10::Backend’ to ‘c10::DeviceType’
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/Device.h:30:16: note: candidate: constexpr c10::Device::Device(const c10::Device&)
 struct C10_API Device final {
                ^
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/Device.h:30:16: note:   no known conversion for argument 1 from ‘c10::Backend’ to ‘const c10::Device&’
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/Device.h:30:16: note: candidate: constexpr c10::Device::Device(c10::Device&&)
/home/manuelanayantarajeyaraj/.local/lib/python3.6/site-packages/torch/lib/include/c10/Device.h:30:16: note:   no known conversion for argument 1 from ‘c10::Backend’ to ‘c10::Device&&’
error: command 'gcc' failed with exit status 1

Hence, I am unable to install the kbc package and proceed with the datasets from there.

Number of parameters of models.

Hello,

I would like to thank you for the great work! I was wondering about your opinion on the following reasoning:

  • We ran the same grid for all algorithms and regularizers on the FB15K, FB15K-237, WN18, WN18RR datasets, with a rank set to 2000 for ComplEx, and 4000 for CP.

  • If so, by using the notation provided in "Canonical Tensor Decomposition for Knowledge Base Completion", the number of parameters would be |N|*2000*2 + |P|*2000*2 for ComplEx and |N|*4000 + |P|*4000 for CP.
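
(A generic way to sanity-check such counts, independent of the exact formula, is to sum the parameter sizes of the instantiated PyTorch model, e.g. the ComplEx or CP model built in the learn.py snippet quoted earlier on this page:)

import torch

def count_parameters(model: torch.nn.Module) -> int:
    # Total number of scalar parameters (entity and predicate factors included).
    return sum(p.numel() for p in model.parameters())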

Cheers,
Caglar Demir

Bug with eval function

I set up the conda environment exactly as specified; however, there seems to be an evaluation bug (on the third epoch) when I run the command below.

python3 kbc/learn.py --dataset FB237 --model ComplEx --regularizer N3 --optimizer Adagrad --rank

[screenshot: eval bug]

ModuleNotFoundError: No module named 'kbc.lib.bindings'

I followed all the steps reported in the README, but I got the following error:

(kbc_env) prachi@gpu01:~/kbc$ python kbc/learning/learn.py --dataset FB15K --model ComplEx --rank 2000 --optimizer Adagrad --learning_rate 1e-2 --batch_size 100 --regularizer L3ComplEx --reg 5e-3 --learn_inverse_rels 1 --max_epochs 100 --valid 1
Traceback (most recent call last):
File "kbc/learning/learn.py", line 16, in
from kbc.lib.bindings import (
ModuleNotFoundError: No module named 'kbc.lib.bindings'

Please help!

ValueError: mean is not a valid value for reduction

Command used:
python kbc/learn.py --dataset FB15K --model ComplEx --rank 2000 --optimizer Adagrad --learning_rate 1e-2 --batch_size 100 --regularizer N3 --reg 5e-3 --max_epochs 10 --valid 1

Pytorch version:

torch.__version__
'0.4.1.post2'

Error:
Traceback (most recent call last):
File "kbc/learn.py", line 129, in
cur_loss = optimizer.epoch(examples)
File "/pytorch_kbc/kbc/kbc/optimizers.py", line 42, in epoch
l_fit = loss(predictions, truth)
File "/home/prachi/anaconda3/envs/kbc_env2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in call
result = self.forward(*input, **kwargs)
File "/home/prachi/anaconda3/envs/kbc_env2/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 862, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/home/prachi/anaconda3/envs/kbc_env2/lib/python3.6/site-packages/torch/nn/functional.py", line 1550, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/home/prachi/anaconda3/envs/kbc_env2/lib/python3.6/site-packages/torch/nn/functional.py", line 1407, in nll_loss
return torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
File "/home/prachi/anaconda3/envs/kbc_env2/lib/python3.6/site-packages/torch/nn/functional.py", line 30, in get_enum
raise ValueError(reduction + " is not a valid value for reduction")
ValueError: mean is not a valid value for reduction

Triple Indices to Triple Labels

Hi!
I have a question about datasets.
After downloading the datasets and calling python kbc/process_datasets.py, I realized that the triples are in an "index" format, such as:
a triple --> 2431 89 5452.

Where can I find a mapping from the triple indices that this repository uses to their true labels?
(e.g. --> /m/07l450 /film/film/genre /m/082gq)

Why do the datasets downloaded directly from the website have no suffix?

I directly copied the links from download_data.sh because my remote host's network connection always times out, so none of the downloaded files have a suffix; they are just named train, valid, test instead of train.pickle, valid.pickle, test.pickle. How can I download the datasets with the suffix? I would appreciate any help.

Output data

Hello,

my goal is to produce Knowledge Graph embeddings to use later on in some downstream tasks.
So my idea was to use the kbc package for my datasets. The resulting graphs would be small, so the kbc package seems like a great option! However, as opposed to Facebook's PBG, I don't see any implemented mechanism for getting the output data (e.g. HDF5 files, TSV).

Is there any pre-defined approach you could suggest, something along those lines?

If not, I suppose the only option for me is to start playing around with the kbc code...
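
In the meantime, a generic workaround, sketched here under the assumption that the trained model is an ordinary torch.nn.Module as in the learn.py variation quoted earlier on this page, would be to dump every named parameter tensor (entity and predicate embeddings included) to .npy files after training:

import numpy as np
import torch

def export_parameters(model: torch.nn.Module, out_prefix: str = "kbc_export") -> None:
    # Save each named parameter tensor as a separate .npy file.
    for name, param in model.named_parameters():
        np.save(f"{out_prefix}_{name.replace('.', '_')}.npy",
                param.detach().cpu().numpy())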

Reciprocal learning questions

Hi, first of all many thanks for releasing the code alongside the paper.

In Table 2 of the paper we see results with and without reciprocal learning. By taking a look at and playing with the code, I realised that reciprocal learning seems to be hardcoded into the way the datasets are loaded and the embeddings are defined.

I have two questions then:

  1. Is it possible to run the code without reciprocal learning to fully reproduce the results of table 2?
  2. Is the reciprocal evaluation, on the right-hand side only, fully comparable to other approaches where both the subject and the object are corrupted?

Thanks again.

Experiments with "standard" ComplEx

Hi, I noticed that in your paper you mentioned experiments conducted with "standard ComplEx_N3", whose results are better than the original ComplEx model. May I ask what parameters you used for that model? Are you using the F2 regularizer, and how do you remove the "reciprocal" setting from your model? Thanks.
