
CogGPT

English | 中文



Code and data for the paper "CogGPT: Unleashing the Power of Cognitive Dynamics on Large Language Models".

CogBench

CogBench is a bilingual benchmark specifically designed to evaluate the cognitive dynamics of Large Language Models (LLMs) in both Chinese and English. CogBench is divided into two parts based on the type of information flow: CogBench_a for articles and CogBench_v for short videos.

In this benchmark, both an LLM and a human are assigned the same initial profile and receive identical information flows over 10 iterations. After each iteration, they complete the same cognitive questionnaire, which uses a five-point Likert scale so that participants can express their attitudes toward the current questions.

CogBench aims to assess the cognitive alignment between the LLM and the human. The evaluation metrics include:

  1. Authenticity: Measures the consistency of ratings between the LLM and the human.
  2. Rationality: Assesses the quality of the reasoning the LLM provides alongside its ratings (an illustrative computation for both metrics is sketched below this list).
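As a rough illustration of how scores like these could be aggregated, the sketch below treats authenticity as a correlation-style agreement between the LLM's and the human's Likert ratings, and rationality as a mean of annotator scores. Both choices are assumptions made for illustration; the metrics actually reported by this project are computed by evaluation.py.

# Illustrative sketch only; the official formulas live in evaluation.py.
# Authenticity is approximated here by a Pearson-style correlation between
# two equal-length lists of 1-5 Likert ratings (an assumption, not the
# paper's exact definition), and rationality by the mean of 1-5 annotator scores.
from statistics import mean

def authenticity(llm_ratings, human_ratings):
    mx, my = mean(llm_ratings), mean(human_ratings)
    cov = sum((x - mx) * (y - my) for x, y in zip(llm_ratings, human_ratings))
    var_x = sum((x - mx) ** 2 for x in llm_ratings)
    var_y = sum((y - my) ** 2 for y in human_ratings)
    return cov / (var_x * var_y) ** 0.5 if var_x and var_y else 0.0

def average_rationality(annotator_scores):
    return mean(annotator_scores)

# Example with made-up ratings collected after one iteration.
print(authenticity([4, 2, 5, 3, 1], [5, 2, 4, 3, 2]))
print(average_rationality([3, 4, 3, 5]))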

CogGPT

CogGPT is an LLM-driven agent, designed to showcase the cognitive dynamics of LLMs. Confronted with ever-changing information flows, CogGPT regularly updates its profile and methodically stores preferred knowledge in its long-term memory. This unique capability enables CogGPT to sustain role-specific cognitive dynamics, facilitating lifelong learning.
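A conceptual sketch of this loop follows. The class, method names, and profile strings below are hypothetical illustrations of the ideas in the paragraph above, not the interface actually exposed by coggpt/agent.py.

# Hypothetical sketch of the iterative cognition loop: receive information,
# update the profile, keep preferred knowledge in long-term memory, then
# answer the cognitive questionnaire. All names here are illustrative only.
class CognitiveAgent:
    def __init__(self, initial_profile):
        self.profile = initial_profile      # role-specific persona, revised over time
        self.long_term_memory = []          # knowledge retained across iterations

    def update_profile(self, information):
        # In CogGPT this step is LLM-driven; this stub only records exposure.
        self.profile = f"{self.profile} | exposed to: {information[:40]}"

    def memorize(self, information):
        self.long_term_memory.append(information)

    def answer_questionnaire(self, questions):
        # A real agent would prompt an LLM with the profile and memory;
        # this stub returns neutral Likert answers.
        return {question: 3 for question in questions}

def run(agent, information_flow, questions):
    answers = []
    for information in information_flow:    # e.g. 10 articles or short videos
        agent.update_profile(information)
        agent.memorize(information)
        answers.append(agent.answer_questionnaire(questions))
    return answers

# Example with a made-up profile and two stand-in articles.
results = run(CognitiveAgent("a graduate student interested in farming"),
              ["article 1 text", "article 2 text"],
              ["How interested are you in rural life?"])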



News

  • 2024.01.17 - Paper released.
  • 2024.01.12 - CogBench released.
  • 2024.01.05 - Project initially released.

User Guide

Setup

Follow these steps to set up CogBench and run experiments:

  1. Clone the Repository: Clone this repository to your local environment.
  2. Switch Directory: Use the cd command to enter the repository directory (example commands for these first two steps are shown after this list).
  3. Download Data: Download CogBench and save it in the dataset directory.
  4. Run Experiments: Implement your method using cogbench_a.json and cogbench_v.json for CogBench_a and CogBench_v, respectively, and record your experimental results.
  5. Evaluate Results: Fill in the eval_cogbench_a.json and eval_cogbench_v.json files with your experimental results for evaluation.
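For steps 1 and 2, a minimal command sequence might look like the following; the repository URL and directory name are placeholders, not values taken from this README:

git clone <REPO_URL> CogGPT
cd CogGPT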

Using CogGPT

  1. Declare the environment variable used for the GPT-4 API (a sketch of such an API call appears after these steps):
export OPENAI_API_KEY=sk-xxxxx
  2. Run CogGPT with default settings:
python coggpt/agent.py
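coggpt/agent.py presumably reads the key from this environment variable when it calls the GPT-4 API. The sketch below shows what such a call looks like with the openai Python package (version 1.x); it is an illustration under those assumptions, not the repository's actual code, and the prompt text is invented.

# Minimal illustration of a GPT-4 call that relies on OPENAI_API_KEY being
# set in the environment; not the code path used by coggpt/agent.py.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # raises KeyError if the key is missing
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize today's information flow in one sentence."}],
)
print(response.choices[0].message.content)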

Evaluation

To evaluate your method on the authenticity and rationality metrics, run the following command:

python evaluation.py --file_path <YOUR_FILE_PATH> --method <YOUR_METHOD_NAME> --authenticity --rationality

For example, to evaluate the CoT method on CogBench_v, run:

python evaluation.py --file_path dataset/english/eval_cogbench_v.json --method CoT --authenticity --rationality

The evaluation scores will be displayed as follows:

======= CoT Authenticity =======
Average authenticity: 0.15277666156947955
5th iteration authenticity: 0.3023255813953488
10th iteration authenticity: 0.13135593220338992
======= CoT Rationality =======
Average rationality: 3.058333333333333
5th iteration rationality: 3.7666666666666666
10th iteration rationality: 3.0833333333333335
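For each metric, the summary reports the average over all 10 iterations together with snapshots at the 5th and 10th iterations. A rough sketch of that aggregation is shown below; the per-iteration numbers are made-up placeholders and the actual computation is defined in evaluation.py.

# Illustrative aggregation of per-iteration scores into the summary above.
# The input list holds made-up placeholder values, one score per iteration.
def summarize(per_iteration_scores, name):
    average = sum(per_iteration_scores) / len(per_iteration_scores)
    print(f"Average {name}: {average}")
    print(f"5th iteration {name}: {per_iteration_scores[4]}")
    print(f"10th iteration {name}: {per_iteration_scores[9]}")

summarize([0.10, 0.18, 0.22, 0.25, 0.30, 0.21, 0.17, 0.15, 0.14, 0.13], "authenticity")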

Please refer to CogBench for more details.

Citation

@misc{lv2024coggpt,
      title={CogGPT: Unleashing the Power of Cognitive Dynamics on Large Language Models}, 
      author={Yaojia Lv and Haojie Pan and Ruiji Fu and Ming Liu and Zhongyuan Wang and Bing Qin},
      year={2024},
      eprint={2401.08438},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
