
acos's People

Contributors

blhoy, rxiacn


acos's Issues

pickle.UnpicklingError

When I try to run the project (sh run.sh) I get:

Traceback (most recent call last):
  File "run_step1.py", line 481, in <module>
    main()
  File "run_step1.py", line 353, in main
    model = model_dict[args.model_type].from_pretrained(args.bert_model, num_labels=num_labels)
  File "/home/filip/ACOS/BERT/pytorch_pretrained_BERT/ACOS-main/Extract-Classify-ACOS/modeling.py", line 721, in from_pretrained
    state_dict = torch.load(resolved_archive_file, map_location='cpu')
  File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 713, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 920, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'v'.

Does anyone have a similar issue?
I modified the corresponding BERT_BASE_DIR, BASE_DIR, DATA_DIR and output_dir.
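In other PyTorch projects, "invalid load key, 'v'" from torch.load usually means the downloaded pytorch_model.bin is not a real checkpoint but a Git LFS pointer file (which starts with the text "version https://git-lfs...") or an HTML error page from a failed download. A quick, hedged way to check; the path below is a placeholder for the file inside your BERT_BASE_DIR:

# Hypothetical check, not part of the repo: is pytorch_model.bin a real checkpoint?
with open("pytorch_model.bin", "rb") as f:
    head = f.read(64)

if head.startswith(b"version https://git-lfs"):
    print("Git LFS pointer, not the model: run git lfs pull or re-download the weights.")
elif head.lstrip().startswith(b"<"):
    print("Looks like an HTML page from a failed download, not a checkpoint.")
else:
    print("Header looks binary; the checkpoint itself is probably fine:", head[:8])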

About the hyperparameters

Hello, I recently came across your ACOS work and am very interested in it. I have been trying to reproduce the results in the paper, but the results I get from your default scripts differ considerably from those reported. Could you provide the detailed hyperparameters, including the random seeds?
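Until the authors publish their exact settings, a generic seed-fixing snippet (an assumption, not the authors' configuration) at least removes run-to-run variance from Python, NumPy and PyTorch randomness:

# Generic reproducibility sketch; the seed value 42 is arbitrary, not the paper's.
import random
import numpy as np
import torch

def set_seed(seed: int = 42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)

set_seed(42)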

Issues for run_step2.py

When training reaches step 2, every metric in the Eval phase comes out as 0:

12/15/2021 19:36:24 - INFO - __main__ -   ***** Running training *****
Epoch:   0%|                                                                                                                                                        | 0/1 [00:00<?, ?it/s]12/15/2021 19:36:32 - INFO - __main__ -   Total Loss is 0.707695484161377 .
12/15/2021 19:36:33 - INFO - __main__ -   Total Loss is 0.17088936269283295 .
12/15/2021 19:36:35 - INFO - __main__ -   Total Loss is 0.11365362256765366 .
12/15/2021 19:36:36 - INFO - __main__ -   Total Loss is 0.09475410729646683 .
12/15/2021 19:36:38 - INFO - __main__ -   Total Loss is 0.10162457078695297 .
12/15/2021 19:36:39 - INFO - __main__ -   Total Loss is 0.0938352420926094 .
12/15/2021 19:36:40 - INFO - __main__ -   Total Loss is 0.10106469690799713 .
12/15/2021 19:36:42 - INFO - __main__ -   Total Loss is 0.1062822937965393 .
12/15/2021 19:36:43 - INFO - __main__ -   Total Loss is 0.10448531806468964 .
12/15/2021 19:36:45 - INFO - __main__ -   Total Loss is 0.09545283764600754 .
12/15/2021 19:36:46 - INFO - __main__ -   Total Loss is 0.08869557082653046 .
12/15/2021 19:36:48 - INFO - __main__ -   Total Loss is 0.09898055344820023 .
12/15/2021 19:36:49 - INFO - __main__ -   Total Loss is 0.096546471118927 .
12/15/2021 19:36:51 - INFO - __main__ -   Total Loss is 0.09676332026720047 .
12/15/2021 19:36:52 - INFO - __main__ -   Total Loss is 0.09095398336648941 .
12/15/2021 19:36:54 - INFO - __main__ -   Total Loss is 0.095858633518219 .
Quad num: 0
tp: 0.0. fp: 0.0. fn: 251.0.
12/15/2021 19:36:54 - INFO - __main__ -   ***** Eval results *****
12/15/2021 19:36:54 - INFO - __main__ -     micro-F1 = 0
12/15/2021 19:36:54 - INFO - __main__ -     precision = 0
12/15/2021 19:36:54 - INFO - __main__ -     recall = 0.0
Quad num: 0
tp: 0.0. fp: 0.0. fn: 895.0.
tp: 0.0. fp: 0.0. fn: 490.0.
0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 142.0.
1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 98.0.
2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 102.0.
3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 715.0.
4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
12/15/2021 19:37:00 - INFO - __main__ -   ***** category results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
tp: 0.0. fp: 0.0. fn: 399.0.
0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 123.0.
1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 85.0.
2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 95.0.
3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 623.0.
4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
12/15/2021 19:37:00 - INFO - __main__ -   ***** sentiment results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
tp: 0.0. fp: 0.0. fn: 497.0.
0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 144.0.
1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 101.0.
2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 103.0.
3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 725.0.
4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
12/15/2021 19:37:00 - INFO - __main__ -   ***** category sentiment results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
tp: 0.0. fp: 0.0. fn: 580.0.
0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 153.0.
1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 139.0.
2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 98.0.
3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 804.0.
4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
12/15/2021 19:37:00 - INFO - __main__ -   ***** aspect results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
tp: 0.0. fp: 0.0. fn: 596.0.
0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 160.0.
1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 144.0.
2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 103.0.
3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 827.0.
4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
12/15/2021 19:37:00 - INFO - __main__ -   ***** category aspect results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
tp: 0.0. fp: 0.0. fn: 589.0.
0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 158.0.
1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 141.0.
2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 101.0.
3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 818.0.
4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
12/15/2021 19:37:00 - INFO - __main__ -   ***** sentiment aspect results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
tp: 0.0. fp: 0.0. fn: 600.0.
0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 161.0.
1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 145.0.
2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 103.0.
3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 832.0.
4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
12/15/2021 19:37:00 - INFO - __main__ -   ***** category sentiment aspect results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
tp: 0.0. fp: 0.0. fn: 580.0.
0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 150.0.
1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 113.0.
2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 102.0.
3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 811.0.
4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
12/15/2021 19:37:00 - INFO - __main__ -   ***** opinion results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
tp: 0.0. fp: 0.0. fn: 595.0.
0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 154.0.
1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 116.0.
2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 103.0.
3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 827.0.
4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
12/15/2021 19:37:00 - INFO - __main__ -   ***** category opinion results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
tp: 0.0. fp: 0.0. fn: 580.0.
0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 150.0.
1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 114.0.
2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 103.0.
3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 812.0.
4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
12/15/2021 19:37:00 - INFO - __main__ -   ***** sentiment opinion results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
tp: 0.0. fp: 0.0. fn: 595.0.
0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 154.0.
1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 117.0.
2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 104.0.
3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 828.0.
4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
12/15/2021 19:37:00 - INFO - __main__ -   ***** category sentiment opinion results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
tp: 0.0. fp: 0.0. fn: 659.0.
0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 167.0.
1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 147.0.
2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 104.0.
3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 895.0.
4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
12/15/2021 19:37:00 - INFO - __main__ -   ***** aspect opinion results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
tp: 0.0. fp: 0.0. fn: 659.0.
0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 167.0.
1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 147.0.
2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 104.0.
3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 895.0.
4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
12/15/2021 19:37:00 - INFO - __main__ -   ***** category aspect opinion results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
tp: 0.0. fp: 0.0. fn: 659.0.
0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 167.0.
1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 147.0.
2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 104.0.
3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 895.0.
4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
12/15/2021 19:37:00 - INFO - __main__ -   ***** sentiment aspect opinion results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
tp: 0.0. fp: 0.0. fn: 659.0.
0 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 167.0.
1 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 147.0.
2 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 104.0.
3 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
tp: 0.0. fp: 0.0. fn: 895.0.
4 :  {'precision': 0, 'recall': 0.0, 'micro-F1': 0}
12/15/2021 19:37:00 - INFO - __main__ -   ***** category sentiment aspect opinion results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.00%
12/15/2021 19:37:00 - INFO - __main__ -   -----------------------------------
12/15/2021 19:37:00 - INFO - __main__ -   ***** Test results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.0
Epoch: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:28<00:00, 28.83s/it]
12/15/2021 19:37:00 - INFO - __main__ -   ***** Test results *****
12/15/2021 19:37:00 - INFO - __main__ -     micro-F1 = 0
12/15/2021 19:37:00 - INFO - __main__ -     precision = 0
12/15/2021 19:37:00 - INFO - __main__ -     recall = 0.0

Question about the subsets

Hi, I have been testing the model's performance. In the paper's section on the Effectiveness of Modeling Implicit Aspects/Opinions, the dataset is split into 4 subsets, and I have a question about this. If a sample carries both an (IA, EO) label and an (EA, EO) label, does the sample belong to the (IA, EO) subset or to the (EA, EO) subset? To both? Or are the labels themselves assigned to different subsets? Could you share the code for this split? Thanks.
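One plausible reading (not confirmed by the released code) is that the split is per quadruple rather than per sentence, with implicit aspects/opinions marked by the span -1,-1; under that assumption a sentence can contribute quads to several subsets:

# Hypothetical per-quad assignment to the four implicit/explicit subsets.
def quad_subset(aspect_span: str, opinion_span: str) -> str:
    ea = aspect_span != "-1,-1"   # explicit aspect?
    eo = opinion_span != "-1,-1"  # explicit opinion?
    return {(True, True): "EA&EO", (True, False): "EA&IO",
            (False, True): "IA&EO", (False, False): "IA&IO"}[(ea, eo)]

print(quad_subset("6,9", "3,4"))    # EA&EO
print(quad_subset("-1,-1", "3,4"))  # IA&EO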

Issues for get_1st_pairs.py

Hi, the file pred4pipeline.txt used on line 15 of ACOS/Extract-Classify-ACOS/tokenized_data/get_1st_pairs.py is missing. Could you provide it?

Question about the number of aspects, opinions and AO pairs in the dataset

Hello, and thank you for the excellent work! While reproducing the code I ran into a small problem with the number of aspects, opinions and AO pairs in Laptop-ACOS (ours). My statistics are:
{'asp_cont': 4519, 'opi_cont': 4190, 'pair_cont': 3278, 'sentence_num': 4076, 'sentence_with_pair_num': 2285}
These counts are lower than the numbers in the paper. Was the wrong dataset released by mistake, or are the implicit (NULL) aspects and opinions that never appear in the text also counted?
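For comparison, a small counting sketch that can be run both with and without the implicit (NULL, i.e. -1,-1) terms may show where the discrepancy comes from. It assumes each line is the sentence followed by tab-separated quads of the form "aspect_span category sentiment opinion_span"; if the released format differs, adjust the parsing accordingly:

# Hypothetical statistics script under the format assumption stated above.
from collections import Counter

def count_stats(path, include_implicit=False):
    stats = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            sentence, *quads = line.rstrip("\n").split("\t")
            stats["sentence_num"] += 1
            aspects, opinions, pairs = set(), set(), set()
            for quad in quads:
                a_span, category, sentiment, o_span = quad.split()
                if include_implicit or a_span != "-1,-1":
                    aspects.add(a_span)
                if include_implicit or o_span != "-1,-1":
                    opinions.add(o_span)
                if include_implicit or (a_span != "-1,-1" and o_span != "-1,-1"):
                    pairs.add((a_span, o_span))
            stats["asp_cont"] += len(aspects)
            stats["opi_cont"] += len(opinions)
            stats["pair_cont"] += len(pairs)
            stats["sentence_with_pair_num"] += bool(pairs)
    return dict(stats)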

Encoding issue in step 2

I recently read the paper and tried to run the code, but step 2 keeps failing with a UTF-8 decoding error. I have tried most of the fixes suggested online, without success. Is there any way around this? The reading code is:

def _read_tsv(cls, input_file, quotechar=None):
    """Reads a tab separated value file."""
    with open(input_file, "r", encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t", quotechar=quotechar)
        lines = []
        for line in reader:
            if sys.version_info[0] == 2:
                line = list(str(cell) for cell in line)
            lines.append(line)
        return lines
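If the underlying data file is not actually UTF-8 (for example, it was saved with a Windows/GBK code page), two common workarounds are to re-encode the file once or to open it more permissively. This is a guess at the cause, not a confirmed fix for this repository, and the file name below is a placeholder:

# Option 1: re-encode the offending .tsv to UTF-8 once (source encoding is assumed).
with open("train.tsv", "rb") as f:
    raw = f.read()
text = raw.decode("gbk", errors="replace")   # try "cp1252" or "latin-1" if gbk is wrong
with open("train_utf8.tsv", "w", encoding="utf-8") as f:
    f.write(text)

# Option 2: make _read_tsv tolerant of stray bytes instead of crashing:
#     open(input_file, "r", encoding="utf-8", errors="replace")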

Question about the dataset format

i had a great time at jekyll and hyde ! 6,9 RESTAURANT#GENERAL 2 3,4
Hi, in this line, what do the numbers 6,9, 2 and 3,4 each mean?
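Reading the fields as aspect span, category, sentiment and opinion span is consistent with the example: 6,9 are end-exclusive token indices covering "jekyll and hyde", 2 is the sentiment code (commonly 0 = negative, 1 = neutral, 2 = positive), and 3,4 covers "great"; implicit terms are usually written as -1,-1. A small decoding sketch under those assumptions (the tab separator between sentence and quad is also assumed):

# Hypothetical decoder for one annotation line, format assumptions as stated above.
line = "i had a great time at jekyll and hyde !\t6,9 RESTAURANT#GENERAL 2 3,4"
sentence, quad = line.split("\t")
tokens = sentence.split()
a_span, category, sentiment, o_span = quad.split()

def span_text(span):
    start, end = map(int, span.split(","))
    return "NULL" if start == -1 else " ".join(tokens[start:end])

print(span_text(a_span), category, sentiment, span_text(o_span))
# -> jekyll and hyde RESTAURANT#GENERAL 2 great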

environment configuration

Hello, sorry to bother you. Could you provide a requirements.txt? I am very interested in your research and would like to run it locally.

issues for step1 eval_metrics.py

Hi, in step1\eval_metrics.py, lines 115-127 build pred_tag. At line 115, for i in range(len(pred_aspect_tag)):, pred_aspect_tag was produced batch by batch in the previous step, so its length is the number of batches and its elements are tensors rather than lists like the earlier cur_quad. Doesn't cur_aspect_tag = ''.join(str(ele) for ele in pred_aspect_tag[i]) then concatenate a whole batch of tensors? Shouldn't the tensors be converted to lists first, so that the aspect ids are joined per sentence and the "32*" and "54*" patterns are detected per sentence?
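A small illustration of the fix proposed above (toy tensors, not the repository's real shapes): convert each batch tensor to nested Python lists with .tolist() and build one tag string per sentence before scanning for the "32*" / "54*" patterns:

import torch

# Toy stand-in for pred_aspect_tag: one batch tensor of shape (batch, seq_len).
pred_aspect_tag = [torch.tensor([[3, 2, 0], [5, 4, 0]])]

per_sentence_tags = []
for batch in pred_aspect_tag:        # each element is a tensor, not a list
    for sent in batch.tolist():      # nested Python lists, one per sentence
        per_sentence_tags.append("".join(str(tag) for tag in sent))

print(per_sentence_tags)             # ['320', '540']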

Have existing methods already been surpassed?

Thank you for providing the ACOS datasets. However, using a T5-large model with the same datasets and evaluation protocol, I obtain an F1 of 0.458 on Restaurant and 0.462 on Laptop, far above the methods in the paper.

annotation tools

Great work.

Besides, are there any open-source annotation tools for annotating quadruple data?

How to prepare an inference script from the model output

Hello,
I'm getting output like the lines below, but I could not find a consistent pattern to extract from in order to prepare an inference script. I want to see each aspect together with its corresponding opinion.
i really love their environment , people but service and food taste was worst . a-4,5 a-6,7 a-8,9 a-10,11 o-2,3 o-13,14
delicious chicken fry , brown cream coffee and green tea but worst cheese pizza , beef and chicken salad . a-1,3 a-4,7 a-8,10 a-12,14 a-15,17 a-17,19 o-0,1 o-11,12
the chicken fry was not so good . the green tea was not bad . a-1,3 a-9,11 o-4,7 o-12,14

thanks.
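Under the assumption that the trailing tags are end-exclusive token spans ("a-start,end" for aspects, "o-start,end" for opinions), the spans themselves can be decoded as below; note that this step-1 style output only lists the extracted spans, so pairing each aspect with its corresponding opinion is left to the second-stage classifier:

import re

# Hypothetical decoder for the output lines shown above.
def decode(line):
    parts = line.split()
    spans = [p for p in parts if re.fullmatch(r"[ao]-\d+,\d+", p)]
    tokens = parts[: len(parts) - len(spans)]
    out = {"aspects": [], "opinions": []}
    for span in spans:
        kind, idx = span.split("-")
        start, end = map(int, idx.split(","))
        out["aspects" if kind == "a" else "opinions"].append(" ".join(tokens[start:end]))
    return out

print(decode("the chicken fry was not so good . the green tea was not bad . a-1,3 a-9,11 o-4,7 o-12,14"))
# -> {'aspects': ['chicken fry', 'green tea'], 'opinions': ['not so good', 'not bad']}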
