
DARER

This repository contains the PyTorch source code for our paper: DARER: Dual-task Temporal Relational Recurrent Reasoning Network for Joint Dialog Sentiment Classification and Act Recognition.

Bowen Xing and Ivor W. Tsang.

ACL 2022 (Findings).

Architectures

DARER's Architecture:

Requirements

Our code relies on Python 3.6 and the following libraries:

  • torch-geometric==1.7.0
  • torch==1.5.0
  • tqdm==4.60.0
  • transformers==3.3.1
  • numpy==1.19.2
  • scikit-learn==0.24.2
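A minimal environment-setup sketch for the pins above, assuming pip and a CPU-only machine. Note that `torch-scatter` and `torch-sparse` (pulled in by `torch-geometric`) must be built against the exact installed torch version; the wheel-index URL below is the one PyTorch Geometric historically used for torch-1.5.0 wheels and is an assumption — adjust it for your torch/CUDA combination:

```shell
# Install torch first: the torch-geometric companion packages compile/link against it.
pip install torch==1.5.0
pip install transformers==3.3.1 tqdm==4.60.0 numpy==1.19.2 scikit-learn==0.24.2
# Assumed wheel index; must match the installed torch version exactly.
pip install torch-scatter torch-sparse -f https://pytorch-geometric.com/whl/torch-1.5.0.html
pip install torch-geometric==1.7.0
```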

Run:

LSTM-based Encoder:

DARER/

    # Mastodon // GloVe
    python -u main.py -lr 1e-3 -l2 1e-8 -dd dataset/mastodon -hd 128 -mc 2 -dr 0.2 -sn 3

    # DailyDialog // GloVe
    python -u main.py -ne 50 -hd 300 -lr 1e-3 -l2 1e-8 -dd dataset/dailydialogue -rnb 10 -sn 2 -mc 5 -dr 0.5
    # DailyDialog // randomly initialized word vectors
    python -u main.py -ne 50 -hd 256 -lr 1e-3 -l2 1e-8 -dd dataset/dailydialogue -sn 1 -mc 1e-05 -dr 0.3 -rw

PTLM (pre-trained language model)-based Encoder:

DARER/pre-trained language model/

    # Mastodon // BERT
    python -u main.py -pm bert -bs 16 -sn 4 -dr 0.3 -hd 768 -l2 0.01 -blr 1e-05 -mc 1
    # Mastodon // RoBERTa
    python -u main.py -pm roberta -bs 16 -sn 4 -dr 0.14 -hd 768 -l2 0.0 -blr 1e-05 -mc 1
    # Mastodon // XLNet
    python -u main.py -pm xlnet -bs 12 -sn 4 -dr 0.2 -hd 256 -l2 0.0 -blr 1e-05 -mc 1

We recommend searching for the optimal hyper-parameters in your own experimental environment to obtain the best performance.
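Such a search can be as simple as enumerating command lines over a small grid. The sketch below is illustrative only: the flag names are taken from the commands above, but the grid values and the helper `candidate_commands` are assumptions, not part of the repository:

```python
import itertools

# Hypothetical grid over flags used in the commands above (values are assumptions).
grid = {
    "-lr": ["1e-3", "5e-4"],
    "-dr": ["0.2", "0.3"],
    "-sn": ["2", "3"],
}

def candidate_commands(base="python -u main.py -dd dataset/mastodon -hd 128"):
    """Yield one shell command per hyper-parameter combination."""
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        flags = " ".join(f"{k} {v}" for k, v in zip(keys, values))
        yield f"{base} {flags}"

commands = list(candidate_commands())
print(len(commands))  # 2 * 2 * 2 = 8 combinations
```

Each emitted command can then be run (sequentially or via a job scheduler) and scored on the dev set.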

Citation

If the code is used in your research, please star this repo ^_^ and cite our paper as follows:

@inproceedings{xing-tsang-2022-darer,
    title = "{DARER}: Dual-task Temporal Relational Recurrent Reasoning Network for Joint Dialog Sentiment Classification and Act Recognition",
    author = "Xing, Bowen  and
      Tsang, Ivor",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-acl.286",
    doi = "10.18653/v1/2022.findings-acl.286",
    pages = "3611--3621",
}

Issues

Dataset question

Hi, could you explain what the numbers in the sentiment and act fields of the two datasets represent?

The results in the paper cannot be reproduced

The reproduced results differ considerably from those reported in the paper. I tried tuning the parameters and the results improved, but a gap with the paper's numbers remains. Do you have any suggestions for this problem?

ImportError: libtorch_cpu.so: cannot open shared object file: No such file or directory

(xiaoting) test@test-X785-G30:~/liuluyao/xiaoting/DARER-main$ python -u main.py -lr 1e-3 -l2 1e-8 -dd dataset/mastodon -hd 128 -mc 2 -dr 0.2 -sn 3
Traceback (most recent call last):
  File "main.py", line 9, in <module>
    from nn import TaggingAgent
  File "/home/test/liuluyao/xiaoting/DARER-main/nn/__init__.py", line 1, in <module>
    from nn.model import TaggingAgent
  File "/home/test/liuluyao/xiaoting/DARER-main/nn/model.py", line 7, in <module>
    from nn.encode import BiGraphEncoder
  File "/home/test/liuluyao/xiaoting/DARER-main/nn/encode.py", line 2, in <module>
    from torch_geometric.nn import RGCNConv
  File "/home/test/anaconda3/envs/xiaoting/lib/python3.6/site-packages/torch_geometric/__init__.py", line 5, in <module>
    import torch_geometric.data
  File "/home/test/anaconda3/envs/xiaoting/lib/python3.6/site-packages/torch_geometric/data/__init__.py", line 1, in <module>
    from .data import Data
  File "/home/test/anaconda3/envs/xiaoting/lib/python3.6/site-packages/torch_geometric/data/data.py", line 8, in <module>
    from torch_sparse import coalesce, SparseTensor
  File "/home/test/anaconda3/envs/xiaoting/lib/python3.6/site-packages/torch_sparse/__init__.py", line 2, in <module>
    from .coalesce import coalesce
  File "/home/test/anaconda3/envs/xiaoting/lib/python3.6/site-packages/torch_sparse/coalesce.py", line 2, in <module>
    import torch_scatter
  File "/home/test/anaconda3/envs/xiaoting/lib/python3.6/site-packages/torch_scatter/__init__.py", line 3, in <module>
    from .mul import scatter_mul
  File "/home/test/anaconda3/envs/xiaoting/lib/python3.6/site-packages/torch_scatter/mul.py", line 3, in <module>
    from torch_scatter.utils.ext import get_func
  File "/home/test/anaconda3/envs/xiaoting/lib/python3.6/site-packages/torch_scatter/utils/ext.py", line 2, in <module>
    import torch_scatter.scatter_cpu
ImportError: libtorch_cpu.so: cannot open shared object file: No such file or directory
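This error typically means `torch-scatter`/`torch-sparse` were built against a different torch build than the one currently installed. A common fix (not from the repository authors, and the wheel-index URL is an assumption — adjust it for your torch/CUDA combination) is to reinstall the companion packages against the pinned torch:

```shell
# Remove the mismatched extension packages, then reinstall them
# from wheels built against the installed torch version.
pip uninstall -y torch-scatter torch-sparse
pip install torch==1.5.0
pip install torch-scatter torch-sparse -f https://pytorch-geometric.com/whl/torch-1.5.0.html
```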
