
WSDM2023_KnowledgeNLP_Tutorial

Materials for WSDM2023 tutorial: Knowledge-Augmented Methods for Natural Language Processing

Time and Location

  1. Time: 8:30am - 12:00pm (GMT+8), February 27, 2023.

  2. Location: Empress 2, Level 2, Carlton Hotel, Singapore

  3. Live Stream on Zoom: [Join stream]

Tutorial Abstract

Knowledge in NLP has been a rising trend, especially after the advent of large-scale pre-trained models. NLP models that attend to knowledge can i) access an unlimited amount of external information; ii) delegate the task of storing knowledge from their parameter space to knowledge sources; iii) obtain up-to-date information; and iv) make prediction results more explainable via the selected knowledge. In this tutorial, we will introduce the key steps in integrating knowledge into NLP, including knowledge grounding from text, knowledge representation, and knowledge fusion. We will also introduce recent state-of-the-art applications that fuse knowledge into language understanding, language generation, and commonsense reasoning.
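
For readers new to the area, here is a minimal, self-contained sketch of the retrieve-then-fuse pattern underlying many of the methods covered in the tutorial: ground a query in an external knowledge source, then fuse the retrieved knowledge into the model input. Everything in it (the toy knowledge base, the overlap-based scoring, and the prompt format) is an illustrative assumption, not code from any system discussed in the tutorial; real systems use dense or sparse retrievers and often fuse knowledge at the representation level.

import re

# Toy knowledge source; in the settings covered by the tutorial this would be
# a text corpus, a knowledge graph, or a dictionary, not a hand-written list.
KNOWLEDGE_BASE = [
    "Birds have two legs.",
    "Spiders have eight legs.",
    "The capital of Singapore is Singapore.",
]

def tokenize(text):
    """Lowercase and split into word tokens (toy tokenizer)."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, knowledge_base, k=2):
    """Grounding step: rank snippets by word overlap with the query."""
    q = tokenize(query)
    ranked = sorted(knowledge_base,
                    key=lambda s: len(q & tokenize(s)),
                    reverse=True)
    return ranked[:k]

def fuse(query, snippets):
    """Fusion step: concatenate retrieved knowledge into the model input."""
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Knowledge:\n{context}\n\nQuestion: {query}\nAnswer:"

query = "How many legs do birds have?"
prompt = fuse(query, retrieve(query, KNOWLEDGE_BASE))
print(prompt)  # the augmented prompt would then be passed to a language model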

Reference

Our tutorial has been published in the WSDM '23 proceedings. [Link]

To cite our tutorial, please use the following paper information:

@inproceedings{10.1145/3539597.3572720,
author = {Zhu, Chenguang and Xu, Yichong and Ren, Xiang and Lin, Bill Yuchen and Jiang, Meng and Yu, Wenhao},
title = {Knowledge-Augmented Methods for Natural Language Processing},
year = {2023},
isbn = {9781450394079},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3539597.3572720},
doi = {10.1145/3539597.3572720},
abstract = {Knowledge in NLP has been a rising trend especially after the advent of large-scale pre-trained models. Knowledge is critical to equip statistics-based models with common sense, logic and other external information. In this tutorial, we will introduce recent state-of-the-art works in applying knowledge in language understanding, language generation and commonsense reasoning.},
booktitle = {Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining},
pages = {1228–1231},
numpages = {4},
keywords = {language generation, knowledge-augmented methods, natural language understanding, commonsense reasoning},
location = {Singapore, Singapore},
series = {WSDM '23}
}

Tutorial Materials

1. Slides [Introduction] [KnowledgeForNLU] [KnowledgeForNLG] [KnowledgeForCommonsense] [Conclusion]

2. Video: Available after the tutorial

3. Survey:

  • A Survey of Knowledge-enhanced Text Generation, in ACM Computing Surveys (CSUR) 2022. [pdf]

  • Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing, in ACM Computing Surveys (CSUR) 2023. [pdf]

4. Reading list:

  • KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning, in EMNLP 2019. [pdf]

  • Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-Trained Language Models, in EMNLP 2020. [pdf]

  • Differentiable Open-Ended Commonsense Reasoning, in NAACL 2021. [pdf]

  • CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning, in EMNLP 2021. [pdf]

  • Generate rather than Retrieve: Large Language Models are Strong Context Generators, in ICLR 2023. [pdf]

  • Retrieval Augmentation for Commonsense Reasoning: A Unified Approach, in EMNLP 2022. [pdf]

  • A Unified Encoder-Decoder Framework with Entity Memory, in EMNLP 2022. [pdf]

  • Grape: Knowledge Graph Enhanced Passage Reader for Open-domain Question Answering, in EMNLP 2022. [pdf]

  • KG-FiD: Infusing Knowledge Graph in Fusion-in-Decoder for Open-Domain Question Answering, in ACL 2022. [pdf]

  • Dict-BERT: Enhancing Language Model Pre-training with Dictionary, in ACL 2022. [pdf]

  • Fusing Context Into Knowledge Graph for Commonsense Question Answering, in ACL 2021. [pdf]

  • Retrieval Enhanced Model for Commonsense Generation, in ACL 2021. [pdf]

  • Diversifying Content Generation for Commonsense Reasoning with Mixture of Knowledge Graph Experts, in ACL 2022. [pdf]

  • JAKET: Joint Pre-training of Knowledge Graph and Language Understanding, in AAAI 2022. [pdf]

  • Boosting Factual Correctness of Abstractive Summarization with Knowledge Graph, in NAACL 2021. [pdf]

  • REPLUG: Retrieval-Augmented Black-Box Language Models, on arXiv 2023. [pdf]

Tutorial Schedule (Tentative)

Local time (GMT+8) | Content | Presenter | Slides
08:30-08:45 | Motivation and Introduction of Knowledge in NLP | Chenguang Zhu | [Slides]
08:45-09:35 | Knowledge in Natural Language Understanding | Yichong Xu | [Slides]
09:35-10:00 | Commonsense Knowledge and Reasoning for NLP | Yuchen Lin & Xiang Ren | [Slides]
10:00-10:30 | Coffee Break | - | -
10:30-10:55 | Commonsense Knowledge and Reasoning for NLP (cont.) | Yuchen Lin & Xiang Ren | [Slides]
10:55-11:45 | Knowledge in Natural Language Generation | Wenhao Yu | [Slides]
11:45-12:00 | Summary and Future Direction | Meng Jiang | [Slides]

Presenters


Chenguang Zhu · Yichong Xu · Xiang Ren · Yuchen Lin · Meng Jiang · Wenhao Yu

Chenguang Zhu is a Principal Research Manager in the Microsoft Cognitive Services Research Group, where he leads the Knowledge & Language Team. His research in NLP covers knowledge graphs, text summarization, and task-oriented dialogue. He has led teams to first-place finishes in multiple NLP competitions, including CommonsenseQA, CommonGen, FEVER, CoQA, ARC, and SQuAD v1.0. He holds a Ph.D. in Computer Science from Stanford University. Dr. Zhu has given talks at Stanford University, Carnegie Mellon University, and the University of Notre Dame. He was previously a TA for the Coursera online class "Automata", giving teaching sessions to 100K international students.

Yichong Xu is a Senior Researcher on the Knowledge & Language Team in the Microsoft Cognitive Services Research Group. His research focuses on using external knowledge to help natural language processing, including question answering, commonsense reasoning, and text summarization. Dr. Xu received his Ph.D. in Machine Learning from Carnegie Mellon University. During his time at CMU, he was a TA for large classes (>200 students) on machine learning and convex optimization. Dr. Xu has given talks at the CMU AI Seminar as well as at many international conferences, including ACL, NAACL, NeurIPS, and ICML.

Xiang Ren is an assistant professor in the USC Computer Science Department, a Research Team Leader at USC ISI, and the PI of the Intelligence and Knowledge Discovery (INK) Lab at USC. Previously, he received his Ph.D. in Computer Science from the University of Illinois Urbana-Champaign. Dr. Ren works on knowledge acquisition and reasoning in natural language processing, with a focus on developing human-centered and label-efficient computational methods for building trustworthy NLP systems. He has published over 100 research papers and delivered over 10 tutorials at top conferences in natural language processing, data mining, and artificial intelligence. He has received an NSF CAREER Award, a Web Conference Best Paper runner-up, the ACM SIGKDD Doctoral Dissertation Award, and several research awards from Google, Amazon, JP Morgan, Sony, and Snapchat. He was named to Forbes Asia's 30 Under 30 in 2019.

Bill Yuchen Lin is a Postdoctoral Young Investigator at the Allen Institute for AI (AI2), advised by Prof. Yejin Choi. He received his Ph.D. from the University of Southern California in 2022, advised by Prof. Xiang Ren. His research goal is to teach machines to think, talk, and act with commonsense knowledge and commonsense reasoning ability, as humans do. Toward this ultimate goal, he has been developing knowledge-augmented reasoning methods (e.g., KagNet, MHGRN, DrFact) and constructing benchmark datasets (e.g., CommonGen, RiddleSense, X-CSR) that require commonsense knowledge and complex reasoning for both NLU and NLG. He initiated an online compendium of commonsense reasoning research, which serves as a portal for the community.

Meng Jiang is an assistant professor in the Department of Computer Science and Engineering at the University of Notre Dame. He obtained his B.E. and Ph.D. from Tsinghua University, spent two years at UIUC as a postdoc, and joined Notre Dame in 2017. His research interests include data mining, machine learning, and natural language processing, and he has published more than 100 peer-reviewed papers on these topics. He is a recipient of the Notre Dame International Faculty Research Award. His honors and awards include being a best paper finalist at KDD 2014, the best paper award at the KDD-DLG workshop 2020, and an ACM SIGSOFT Distinguished Paper Award at ICSE 2021. He received an NSF CRII award in 2019 and an NSF CAREER award in 2022.

Wenhao Yu is a Ph.D. candidate in the Department of Computer Science and Engineering at the University of Notre Dame. His research combines language models with knowledge to solve knowledge-intensive applications, such as open-domain question answering and commonsense reasoning. He has published over 15 conference papers and presented 3 tutorials at machine learning and natural language processing conferences, including ICLR, ICML, ACL, and EMNLP. He received a Bloomberg Ph.D. Fellowship in 2022 and won the best paper award at SoCal NLP 2022. He has been a research intern at Microsoft Research and the Allen Institute for AI.
