lm-reasoning's People

Contributors

ber666, haluptzok, huybery, jeffhj, shuyanzhou, siviltaram, zxlzr

lm-reasoning's Issues

In case you were planning to expand on inductive reasoning

Thanks for synthesizing such a fast-growing list of papers on LLMs and reasoning! I also appreciate you writing about reasoning types that go beyond deductive reasoning!

I have a few papers that touch on inductive reasoning in humans and models, in case you'd like to expand on that topic in the survey. One disclaimer: these only deal with what you would consider small LMs (though the methods are model-agnostic). A toy sketch of the property-induction idea these papers share appears at the end of this issue.

Misra, 2022 (AAAI Doctoral Consortium 2022): On Semantic Cognition, Inductive Generalization, and Language Models
https://ojs.aaai.org/index.php/AAAI/article/view/21584

Misra et al., 2022 (CogSci 2022): A Property Induction Framework For Neural Language Models:
https://arxiv.org/abs/2205.06910

Misra et al., 2021 (CogSci 2021): Do language models learn typicality judgments from text? (Experiment 2 is the first analysis of LMs on inductive reasoning)
https://arxiv.org/abs/2105.02987

Other papers that should be included in case you do decide to pursue this route:

Han et al., 2022 (CogSci 2022): Human-like property induction is a challenge for large language models
https://psyarxiv.com/6mkjy/

Yang et al., 2022: Language Models as Inductive Reasoners
https://arxiv.org/abs/2212.10923
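
For orientation, the property-induction setups in these papers broadly ask whether a premise about one category raises an LM's estimate of a conclusion about a related category. Here is a minimal sketch of that scoring idea with GPT-2; this is an illustrative framing, not a reproduction of any one paper's protocol, and the sentence content is made up:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def total_log_prob(text: str) -> float:
    """Total log-probability the LM assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=ids gives the mean cross-entropy over predicted tokens.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

def conclusion_score(premise: str, conclusion: str) -> float:
    """Approximate log P(conclusion | premise) by subtracting the
    premise's own log-probability from the joint score (a rough
    estimate; tokenization at the boundary is not handled exactly)."""
    return total_log_prob(premise + " " + conclusion) - total_log_prob(premise)

# A premise about a taxonomically close category should (for a human)
# support the conclusion more strongly than a distant one.
conclusion = "Therefore, sparrows have sesamoid bones."
for premise in ("Robins have sesamoid bones.", "Tables have sesamoid bones."):
    print(premise, "->", round(conclusion_score(premise, conclusion), 2))
```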

A request to add new papers on logical reasoning data augmentation, prompt augmentation, and evaluation

Hi Jie,

Here are our new papers on logical reasoning data augmentation, prompt augmentation, and evaluation. Please consider adding them to your arXiv paper. Thanks a lot.

Logic-Driven Data Augmentation and Prompt Augmentation

We present an AMR-based, logic-driven data augmentation method for contrastive learning that improves discriminative language models' logical reasoning performance. We also use the same AMR-based augmentation to augment prompts, which helped GPT-4 reach #1 on the ReClor leaderboard (one of the hardest logical reasoning reading comprehension datasets, built from LSAT and GMAT questions). The method also outperforms baseline models on other logical reasoning reading comprehension tasks and natural language inference tasks. Details are in the paper below, followed by a toy sketch of the style of rewrite involved.

Our paper (Qiming Bao, Alex Yuxuan Peng, Zhenyun Deng, Wanjun Zhong, Neset Tan, Nathan Young, Yang Chen, Yonghua Zhu, Michael Witbrock, Jiamou Liu)
"Enhancing Logical Reasoning of Large Language Models through Logic-Driven Data Augmentation" [Paper link] [Source code] [Model weights] [Leaderboard].

Out-of-Distribution Logical Reasoning Evaluation and Prompt Augmentation for Enhancing OOD Logical Reasoning

We present a systematic out-of-distribution evaluation of logical reasoning tasks. We construct three new, more robust logical reasoning datasets, ReClor-Plus, LogiQA-Plus, and LogiQAv2-Plus, from ReClor, LogiQA, and LogiQAv2 by changing the order and form of the answer options (illustrated by the sketch after the citation below). We find that chain-of-thought prompting alone does not improve models' performance in the out-of-distribution scenario, while augmenting prompts with our AMR-based logic-driven data augmentation does improve large language models' performance on out-of-distribution logical reasoning tasks. The three datasets have been included in OpenAI/Evals.
"A Systematic Evaluation of Large Language Models on Out-of-Distribution Logical Reasoning Tasks" [Paper link] [Source code] [Dataset links].

An Empirical Study on Out-of-Distribution Multi-Step Logical Reasoning

We find that pre-trained language models are not good at robust multi-step logical reasoning tasks, and one of the main reasons is the limited amount of training data for deeper multi-step reasoning. We therefore present a larger and deeper multi-step logical reasoning dataset named PARARULE-Plus (the toy sketch after the citation below shows what reasoning depth means here). The dataset has also been included in OpenAI/Evals.
"Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation" [Paper link] [Source code] [Dataset links].

Request to add a new survey

Hi, thanks for your contributions to collating large language model reasoning papers!
Recently, we released a survey on natural language reasoning, organized mainly from another perspective: the reasoning paradigm (end-to-end, forward, and backward).

Here are our survey and repository:
Natural Language Reasoning, A Survey
https://arxiv.org/pdf/2303.14725.pdf
https://github.com/FreedomIntelligence/ReasoningNLP

I believe our surveys and repositories can complement each other in helping people better understand reasoning!

Request to add a paper.

Great work!

Could you please add our paper:

Search-in-the-Chain: Towards Accurate, Credible and Traceable Large Language Models for Knowledge-intensive Tasks
In this paper, we propose a novel framework that combines LLM reasoning with a search engine (a generic sketch of this interleaving pattern appears below).
paper link
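
For context, the general pattern of interleaving LLM reasoning with retrieval can be sketched as a simple loop. This is a generic sketch under assumed interfaces, not the Search-in-the-Chain algorithm itself; `generate` and `search` are hypothetical stand-ins:

```python
def answer_with_search(question: str, generate, search, max_steps: int = 5) -> str:
    """Generic LLM-plus-search loop (illustrative sketch only).

    `generate(prompt)` is a hypothetical LLM call that returns either
    'SEARCH: <query>' when it needs evidence or 'ANSWER: <answer>'
    when done; `search(query)` is a hypothetical retrieval call that
    returns a text snippet.
    """
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        step = generate(prompt)
        if step.startswith("ANSWER:"):
            return step[len("ANSWER:"):].strip()
        if step.startswith("SEARCH:"):
            query = step[len("SEARCH:"):].strip()
            evidence = search(query)
            # Feed retrieved evidence back so the next reasoning
            # step is grounded in it.
            prompt += f"{step}\nEVIDENCE: {evidence}\n"
        else:
            prompt += step + "\n"
    return "No answer within the step budget."
```

The actual framework is more involved than this loop; the sketch only shows the basic shape of the LLM-search interaction.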

Request to add paper

Hi,
great work/repo!

Please consider adding our work on deductive/logical reasoning.

FaiRR: Faithful and Robust Deductive Reasoning over Natural Language, ACL 2022 (arXived on 19 Mar 2022)
paper link
Soumya Sanyal, Harman Singh, Xiang Ren

Paper addition request

Hi, thanks for the great work! I wanted to point to this paper about using an LM to perform reasoning over knowledge graphs for the explainable recommendation task: Faithful Path Language Modelling for Explainable Recommendation over Knowledge Graph
https://arxiv.org/abs/2310.16452
