
huixiangdou's Introduction

InternLM

👋 join us on Discord and WeChat

Introduction

InternLM2.5 series are released with the following features:

  • Outstanding reasoning capability: State-of-the-art performance on Math reasoning, surpassing models like Llama3 and Gemma2-9B.

  • 1M Context window: Nearly perfect at finding needles in the haystack with a 1M-long context, with leading performance on long-context tasks like LongBench. Try it with LMDeploy for 1M-context inference. More details and a file chat demo can be found here.

  • Stronger tool use: InternLM2.5 supports gathering information from more than 100 web pages; the corresponding implementation will be released in Lagent soon. InternLM2.5 has better tool-utilization capabilities in instruction following, tool selection, and reflection. See examples.

News

[2024.07.19] We release the InternLM2-Reward series of reward models in 1.8B, 7B and 20B sizes. See model zoo below for download or model cards for more details.

[2024.07.03] We release InternLM2.5-7B, InternLM2.5-7B-Chat and InternLM2.5-7B-Chat-1M. See model zoo below for download or model cards for more details.

[2024.03.26] We release InternLM2 technical report. See arXiv for details.

[2024.01.31] We release InternLM2-1.8B, along with the associated chat model. They provide a cheaper deployment option while maintaining leading performance.

[2024.01.23] We release InternLM2-Math-7B and InternLM2-Math-20B with pretraining and SFT checkpoints. They surpass ChatGPT despite their small sizes. See InternLM-Math for details and download.

[2024.01.17] We release InternLM2-7B and InternLM2-20B and their corresponding chat models with stronger capabilities in all dimensions. See model zoo below for download or model cards for more details.

[2023.12.13] InternLM-7B-Chat and InternLM-20B-Chat checkpoints are updated. With an improved finetuning strategy, the new chat models can generate higher quality responses with greater stylistic diversity.

[2023.09.20] InternLM-20B is released with base and chat versions.

Model Zoo

InternLM2.5

| Model | Transformers(HF) | ModelScope(HF) | OpenXLab(HF) | OpenXLab(Origin) | Release Date |
| --- | --- | --- | --- | --- | --- |
| InternLM2.5-7B | 🤗internlm2_5-7b | internlm2_5-7b | Open in OpenXLab | Open in OpenXLab | 2024-07-03 |
| InternLM2.5-7B-Chat | 🤗internlm2_5-7b-chat | internlm2_5-7b-chat | Open in OpenXLab | Open in OpenXLab | 2024-07-03 |
| InternLM2.5-7B-Chat-1M | 🤗internlm2_5-7b-chat-1m | internlm2_5-7b-chat-1m | Open in OpenXLab | Open in OpenXLab | 2024-07-03 |

Notes:

The InternLM2.5 series currently includes the 7B model size; the 1.8B and 20B versions will be released soon. The 7B models are efficient for research and application, while the 20B models are more powerful and can support more complex scenarios. The relationships among these models are as follows.

  1. InternLM2.5: Foundation models pre-trained on a large-scale corpus. InternLM2.5 models are recommended for consideration in most applications.
  2. InternLM2.5-Chat: The chat model that undergoes supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), based on the InternLM2.5 model. InternLM2.5-Chat is optimized for instruction following, chat experience, and function calling, and is recommended for downstream applications.
  3. InternLM2.5-Chat-1M: InternLM2.5-Chat-1M supports a 1M-token long context, with performance comparable to InternLM2.5-Chat.

Limitations: Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.

Supplements: HF refers to the format used by HuggingFace in transformers, whereas Origin denotes the format adopted by the InternLM team in InternEvo.

InternLM2-Reward

InternLM2-Reward is a series of reward models, trained on 2.4 million preference samples and available in 1.8B, 7B, and 20B sizes. These models were used in the PPO training of our chat models. See model cards for more details.

| Model | RewardBench Score | Transformers(HF) | ModelScope(HF) | OpenXLab(HF) | Release Date |
| --- | --- | --- | --- | --- | --- |
| InternLM2-1.8B-Reward | 80.6 | 🤗internlm2-1_8b-reward | internlm2-1_8b-reward | Open in OpenXLab | 2024-07-19 |
| InternLM2-7B-Reward | 86.6 | 🤗internlm2-7b-reward | internlm2-7b-reward | Open in OpenXLab | 2024-07-19 |
| InternLM2-20B-Reward | 89.5 | 🤗internlm2-20b-reward | internlm2-20b-reward | Open in OpenXLab | 2024-07-19 |

InternLM2

Our previous generation models with advanced capabilities in long-context processing, reasoning, and coding. See model cards for more details.

| Model | Transformers(HF) | ModelScope(HF) | OpenXLab(HF) | OpenXLab(Origin) | Release Date |
| --- | --- | --- | --- | --- | --- |
| InternLM2-1.8B | 🤗internlm2-1.8b | internlm2-1.8b | Open in OpenXLab | Open in OpenXLab | 2024-01-31 |
| InternLM2-Chat-1.8B-SFT | 🤗internlm2-chat-1.8b-sft | internlm2-chat-1.8b-sft | Open in OpenXLab | Open in OpenXLab | 2024-01-31 |
| InternLM2-Chat-1.8B | 🤗internlm2-chat-1.8b | internlm2-chat-1.8b | Open in OpenXLab | Open in OpenXLab | 2024-02-19 |
| InternLM2-Base-7B | 🤗internlm2-base-7b | internlm2-base-7b | Open in OpenXLab | Open in OpenXLab | 2024-01-17 |
| InternLM2-7B | 🤗internlm2-7b | internlm2-7b | Open in OpenXLab | Open in OpenXLab | 2024-01-17 |
| InternLM2-Chat-7B-SFT | 🤗internlm2-chat-7b-sft | internlm2-chat-7b-sft | Open in OpenXLab | Open in OpenXLab | 2024-01-17 |
| InternLM2-Chat-7B | 🤗internlm2-chat-7b | internlm2-chat-7b | Open in OpenXLab | Open in OpenXLab | 2024-01-17 |
| InternLM2-Base-20B | 🤗internlm2-base-20b | internlm2-base-20b | Open in OpenXLab | Open in OpenXLab | 2024-01-17 |
| InternLM2-20B | 🤗internlm2-20b | internlm2-20b | Open in OpenXLab | Open in OpenXLab | 2024-01-17 |
| InternLM2-Chat-20B-SFT | 🤗internlm2-chat-20b-sft | internlm2-chat-20b-sft | Open in OpenXLab | Open in OpenXLab | 2024-01-17 |
| InternLM2-Chat-20B | 🤗internlm2-chat-20b | internlm2-chat-20b | Open in OpenXLab | Open in OpenXLab | 2024-01-17 |

Performance

We have evaluated InternLM2.5 on several important benchmarks using the open-source evaluation tool OpenCompass. Some of the evaluation results are shown in the tables below. You are welcome to visit the OpenCompass Leaderboard for more evaluation results.

Base Model

| Benchmark | InternLM2.5-7B | Llama3-8B | Yi-1.5-9B |
| --- | --- | --- | --- |
| MMLU (5-shot) | 71.6 | 66.4 | 71.6 |
| CMMLU (5-shot) | 79.1 | 51.0 | 74.1 |
| BBH (3-shot) | 70.1 | 59.7 | 71.1 |
| MATH (4-shot) | 34.0 | 16.4 | 31.9 |
| GSM8K (4-shot) | 74.8 | 54.3 | 74.5 |
| GPQA (0-shot) | 31.3 | 31.3 | 27.8 |

Chat Model

| Benchmark | InternLM2.5-7B-Chat | Llama3-8B-Instruct | Gemma2-9B-IT | Yi-1.5-9B-Chat | GLM-4-9B-Chat | Qwen2-7B-Instruct |
| --- | --- | --- | --- | --- | --- | --- |
| MMLU (5-shot) | 72.8 | 68.4 | 70.9 | 71.0 | 71.4 | 70.8 |
| CMMLU (5-shot) | 78.0 | 53.3 | 60.3 | 74.5 | 74.5 | 80.9 |
| BBH (3-shot CoT) | 71.6 | 54.4 | 68.2* | 69.6 | 69.6 | 65.0 |
| MATH (0-shot CoT) | 60.1 | 27.9 | 46.9 | 51.1 | 51.1 | 48.6 |
| GSM8K (0-shot CoT) | 86.0 | 72.9 | 88.9 | 80.1 | 85.3 | 82.9 |
| GPQA (0-shot) | 38.4 | 26.1 | 33.8 | 37.9 | 36.9 | 38.4 |
  • We use ppl for the MCQ evaluation on the base model.
  • The evaluation results were obtained from OpenCompass, and the evaluation configurations can be found in the configuration files provided by OpenCompass.
  • Results may show numerical differences across OpenCompass versions, so please refer to the latest evaluation results from OpenCompass.
  • * means the result is copied from the original paper.

Requirements

  • Python >= 3.8
  • PyTorch >= 1.12.0 (2.0.0 and above are recommended)
  • Transformers >= 4.38
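
For a fresh environment, a minimal setup sketch is shown below. The environment name and exact package pins are illustrative only; adjust them to match the requirements above.

# Illustrative environment setup (names and pins are assumptions, not official instructions)
conda create -n internlm python=3.10 -y
conda activate internlm
pip install "torch>=2.0" "transformers>=4.38"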

Usages

InternLM supports a diverse range of well-known upstream and downstream projects, such as LLaMA-Factory, vLLM, llama.cpp, and more. This support enables a broad spectrum of users to utilize the InternLM series models more efficiently and conveniently. Tutorials for selected ecosystem projects are available here for your convenience.

In the following sections, we focus on usage with Transformers, ModelScope, and web demos. The chat models adopt the chatml format to support both chat and agent applications. For the best results, please make sure the installed transformers library meets the following requirement before performing inference with Transformers or ModelScope:

transformers >= 4.38
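
If you want to catch version mismatches early, a minimal sanity check is sketched below (this is only an illustrative snippet, not part of the official instructions; packaging ships as a dependency of transformers).

# Verify the installed transformers version before inference (illustrative check).
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("4.38"), \
    f"transformers {transformers.__version__} is too old; please upgrade to >= 4.38"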

Import from Transformers

To load the InternLM2.5-7B-Chat model using Transformers, use the following code:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2_5-7b-chat", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm2_5-7b-chat", device_map="auto", trust_remote_code=True, torch_dtype=torch.float16)
# (Optional) On low-resource devices, you can load the model in 4-bit or 8-bit via bitsandbytes to further save GPU memory.
# InternLM 7B in 4-bit costs nearly 8GB of GPU memory.
# pip install -U bitsandbytes
# 8-bit: model = AutoModelForCausalLM.from_pretrained("internlm/internlm2_5-7b-chat", device_map="auto", trust_remote_code=True, load_in_8bit=True)
# 4-bit: model = AutoModelForCausalLM.from_pretrained("internlm/internlm2_5-7b-chat", device_map="auto", trust_remote_code=True, load_in_4bit=True)
model = model.eval()
response, history = model.chat(tokenizer, "hello", history=[])
print(response)
# Output: Hello? How can I help you today?
response, history = model.chat(tokenizer, "please provide three suggestions about time management", history=history)
print(response)
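
Continuing the example above, the model's remote code also exposes a streaming interface. The sketch below assumes the stream_chat helper shipped with the InternLM remote modeling code; check the model card if your version differs.

# Stream the reply as it is generated (assumes `stream_chat` from the model's remote code).
length = 0
for response, history in model.stream_chat(tokenizer, "hello", history=[]):
    # `response` is the cumulative text so far; print only the newly generated part.
    print(response[length:], end="", flush=True)
    length = len(response)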

Import from ModelScope

To load the InternLM2.5-7B-Chat model using ModelScope, use the following code:

import torch
from modelscope import snapshot_download, AutoTokenizer, AutoModelForCausalLM
model_dir = snapshot_download('Shanghai_AI_Laboratory/internlm2_5-7b-chat')
tokenizer = AutoTokenizer.from_pretrained(model_dir, device_map="auto", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error.
model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, torch_dtype=torch.float16)
# (Optional) On low-resource devices, you can load the model in 4-bit or 8-bit via bitsandbytes to further save GPU memory.
# InternLM 7B in 4-bit costs nearly 8GB of GPU memory.
# pip install -U bitsandbytes
# 8-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_8bit=True)
# 4-bit: model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto", trust_remote_code=True, load_in_4bit=True)
model = model.eval()
response, history = model.chat(tokenizer, "hello", history=[])
print(response)
response, history = model.chat(tokenizer, "please provide three suggestions about time management", history=history)
print(response)

Dialogue

You can interact with the InternLM Chat 7B model through a frontend interface by running the following code:

pip install streamlit
pip install "transformers>=4.38"
streamlit run ./chat/web_demo.py

Deployment by LMDeploy

We use LMDeploy for fast deployment of InternLM.

Inference

With only 4 lines of code, you can perform internlm2_5-7b-chat inference after running pip install lmdeploy.

from lmdeploy import pipeline
pipe = pipeline("internlm/internlm2_5-7b-chat")
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)

To reduce the memory footprint, we offer the 4-bit quantized model internlm2_5-7b-chat-4bit, with which inference can be conducted as follows:

from lmdeploy import pipeline
pipe = pipeline("internlm/internlm2_5-7b-chat-4bit")
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)
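
If you prefer to quantize the weights yourself instead of downloading the pre-quantized model, LMDeploy provides an AWQ quantization command. The sketch below is illustrative; the exact flags may vary across LMDeploy versions, so check lmdeploy lite auto_awq --help.

# Quantize the chat model to 4-bit AWQ weights (flags assumed; verify against your LMDeploy version).
lmdeploy lite auto_awq internlm/internlm2_5-7b-chat --work-dir internlm2_5-7b-chat-4bit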

Moreover, you can independently activate the 8-bit/4-bit KV cache feature:

from lmdeploy import pipeline, TurbomindEngineConfig
pipe = pipeline("internlm/internlm2_5-7b-chat-4bit",
                backend_config=TurbomindEngineConfig(quant_policy=8))
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)

Please refer to the guidance for more details on model deployment. For additional deployment tutorials, feel free to explore here.
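
Beyond the Python pipeline, LMDeploy can also serve the model behind an OpenAI-compatible HTTP API. A minimal sketch follows; the port and request fields are illustrative, so check lmdeploy serve api_server --help and query /v1/models if unsure of the served model name.

# Launch an OpenAI-compatible server (illustrative; flags may vary across versions).
lmdeploy serve api_server internlm/internlm2_5-7b-chat --server-port 23333

# Query it with a standard chat-completions request.
curl http://0.0.0.0:23333/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "internlm/internlm2_5-7b-chat", "messages": [{"role": "user", "content": "hello"}]}'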

1M-long-context Inference

By enabling the Dynamic NTK feature of LMDeploy, you can unlock long-context inference.

Note: 1M context length requires 4xA100-80G.

from lmdeploy import pipeline, GenerationConfig, TurbomindEngineConfig

backend_config = TurbomindEngineConfig(
        rope_scaling_factor=2.5,
        session_len=1048576,  # 1M context length
        max_batch_size=1,
        cache_max_entry_count=0.7,
        tp=4)  # 4xA100-80G.
pipe = pipeline('internlm/internlm2_5-7b-chat-1m', backend_config=backend_config)
prompt = 'Use a long prompt to replace this sentence'
response = pipe(prompt)
print(response)
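
Continuing the example above, decoding behavior can be controlled with the GenerationConfig already imported; the parameter values below are only illustrative.

# Pass sampling parameters explicitly (values are illustrative, not recommendations).
gen_config = GenerationConfig(top_p=0.8, temperature=0.8, max_new_tokens=1024)
response = pipe(prompt, gen_config=gen_config)
print(response)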

Agent

InternLM2.5-Chat models have excellent tool-utilization capabilities and can handle function calls in a zero-shot manner. They also support conducting analysis by gathering information from more than 100 web pages. See more examples in the agent section.
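
As a rough illustration only, the sketch below shows prompt-level zero-shot tool use with the LMDeploy pipeline. This is NOT the official chatml tool-call protocol; see the agent section and Lagent for the supported format. The tool name get_weather and the JSON shape are hypothetical.

# Illustrative, prompt-level sketch of zero-shot tool use (not the official agent protocol).
import json
from lmdeploy import pipeline

pipe = pipeline("internlm/internlm2_5-7b-chat")
system = (
    "You may call the tool get_weather(city: str). When a tool call is needed, "
    'reply with JSON only, e.g. {"tool": "get_weather", "city": "Shanghai"}.'
)
out = pipe([system + "\nUser: What is the weather like in Shanghai today?"])
try:
    call = json.loads(out[0].text)  # e.g. {"tool": "get_weather", "city": "Shanghai"}
    print("tool call requested:", call)
except json.JSONDecodeError:
    print("free-text reply:", out[0].text)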

Fine-tuning

Please refer to finetune docs for fine-tuning with InternLM.

Note: We have migrated the entire training functionality of this project to InternEvo for a better user experience; InternEvo provides efficient pre-training and fine-tuning infrastructure for training InternLM.

Evaluation

We utilize OpenCompass for model evaluation. In InternLM2.5, we primarily focus on standard objective evaluation, long-context evaluation (needle in a haystack), data contamination assessment, agent evaluation, and subjective evaluation.

Objective Evaluation

To evaluate the InternLM model, please follow the guidelines in the OpenCompass tutorial. Typically, we use ppl for multiple-choice questions on the Base model and gen for all questions on the Chat model.
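
As a concrete illustration, an OpenCompass run typically looks like the command below. The model and dataset config names are assumptions; check the configs shipped with your OpenCompass version.

# Run from inside the OpenCompass repository (config names are illustrative).
python run.py --models hf_internlm2_5_7b_chat --datasets mmlu_gen gsm8k_gen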

Long-Context Evaluation (Needle in a Haystack)

For the Needle in a Haystack evaluation, refer to the tutorial provided in the documentation. Feel free to try it out.

Data Contamination Assessment

To learn more about data contamination assessment, please check the contamination eval.

Agent Evaluation

  • To evaluate tool utilization, please refer to T-Eval.
  • For code interpreter evaluation, use the Math Agent Evaluation provided in the repository.

Subjective Evaluation

  • Please follow the tutorial for subjective evaluation.

Contribution

We appreciate all the contributors for their efforts to improve and enhance InternLM. Community users are highly encouraged to participate in the project. Please refer to the contribution guidelines for instructions on how to contribute to the project.

License

The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English)/申请表(中文). For other questions or collaborations, please contact [email protected].

Citation

@misc{cai2024internlm2,
      title={InternLM2 Technical Report},
      author={Zheng Cai and Maosong Cao and Haojiong Chen and Kai Chen and Keyu Chen and Xin Chen and Xun Chen and Zehui Chen and Zhi Chen and Pei Chu and Xiaoyi Dong and Haodong Duan and Qi Fan and Zhaoye Fei and Yang Gao and Jiaye Ge and Chenya Gu and Yuzhe Gu and Tao Gui and Aijia Guo and Qipeng Guo and Conghui He and Yingfan Hu and Ting Huang and Tao Jiang and Penglong Jiao and Zhenjiang Jin and Zhikai Lei and Jiaxing Li and Jingwen Li and Linyang Li and Shuaibin Li and Wei Li and Yining Li and Hongwei Liu and Jiangning Liu and Jiawei Hong and Kaiwen Liu and Kuikun Liu and Xiaoran Liu and Chengqi Lv and Haijun Lv and Kai Lv and Li Ma and Runyuan Ma and Zerun Ma and Wenchang Ning and Linke Ouyang and Jiantao Qiu and Yuan Qu and Fukai Shang and Yunfan Shao and Demin Song and Zifan Song and Zhihao Sui and Peng Sun and Yu Sun and Huanze Tang and Bin Wang and Guoteng Wang and Jiaqi Wang and Jiayu Wang and Rui Wang and Yudong Wang and Ziyi Wang and Xingjian Wei and Qizhen Weng and Fan Wu and Yingtong Xiong and Chao Xu and Ruiliang Xu and Hang Yan and Yirong Yan and Xiaogui Yang and Haochen Ye and Huaiyuan Ying and Jia Yu and Jing Yu and Yuhang Zang and Chuyu Zhang and Li Zhang and Pan Zhang and Peng Zhang and Ruijie Zhang and Shuo Zhang and Songyang Zhang and Wenjian Zhang and Wenwei Zhang and Xingcheng Zhang and Xinyue Zhang and Hui Zhao and Qian Zhao and Xiaomeng Zhao and Fengzhe Zhou and Zaida Zhou and Jingming Zhuo and Yicheng Zou and Xipeng Qiu and Yu Qiao and Dahua Lin},
      year={2024},
      eprint={2403.17297},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

huixiangdou's People

Contributors

boshallen, chg0901, echo-minn, eltociear, fly2tomato, jiayinglii, jimmyma99, lanbitou-tech, lmhh, lzmingyueshanpao, modas-li, sanbuphy, tpoisonooo, weedge

huixiangdou's Issues

Is multi-GPU supported?

I'm thinking of setting up a machine to run this, but for budget reasons I'd like to use two NVIDIA GPUs with roughly 20 GB of VRAM each.

My question is: do large-model projects in general (and this project in particular) make good use of the combined VRAM of multiple GPUs?

Has anyone with suitable hardware tried this? I'd really appreciate advice from someone who has been through it.

The web demo errors out when using the deepseek remote model in a non-Hybrid-LLM environment

error log

context / environment

Intern Studio 30% A100 with the built-in Huixiangdou environment

how to reproduce

  1. Configure config.ini following the hands-on camp tutorial
  2. Run python3 -m tests.test_query_gradio
  3. Open http://127.0.0.1:7860 and enter "How do I deploy HuiXiangDou in a WeChat group?"
  4. The error appears in the terminal

more

  1. Adding --standalone in step 2 above does not help
  2. It runs fine in the Hybrid-LLM environment, although the deepseek API does not seem to be called there

Some questions about the public IP and redis in local deployment

detail

env.sh contains both REDIS_HOST+REDIS_PORT and ENDPOINT: is redis supposed to run locally (e.g. on Ubuntu), or on a cloud server such as Alibaba Cloud? The docs say to "then expose port 7860 through a public IP"; does that mean data flows from port 7860 (server_port) to the public IP and then on to the REDIS SERVER? The configuration doesn't explain how the public-facing server should be set up.

Error in the single_judge function of huixiangdou.service.worker

detail

Hello, when I run python3 -m huixiangdou.main --standalone I get the following error:

(error screenshot)

Notes:

  1. Deployed on a local server
  2. Models: bce-embedding-base_v1, bce-reranker-base_v1, and internlm2-chat-7b, all downloaded from hf-mirror.com
  3. Running the program on the dev machine following the tutorial works fine
  4. Besides the error on the first question, the second question, "How do I deploy HuiXiangDou to a WeChat group?", returns "ErrorCode.UNRELATED", and I'm not sure where the problem is. (This does not happen on the dev machine.)

I'd really appreciate a reply, many thanks!

Best regards!

A small question about a config.ini parameter

detail

What does the reject_throttle parameter in config.ini mean?

[Docs] Powered by A100 compute! Season 3 of the InternLM hands-on camp (书生大模型实战营) has been fully upgraded, with a fun level-based learning mode

Guidebook: https://aicarrier.feishu.cn/wiki/XBO6wpQcSibO1okrChhcBkQjnsf
Repo: https://github.com/InternLM/Tutorial

The InternLM (书生·浦语) community launched the hands-on camp series at the end of 2023. In the six months since, it has reached more than 150,000 learner sessions and incubated over 600 ecosystem projects, earning wide acclaim in the community.
To offer a better learning experience, the camp has been upgraded to the InternLM large-model hands-on camp, and more courses and practical sessions covering the InternLM model family will be added over time. At the same time, we are introducing a brand-new level-based learning mode that makes learning both challenging and fun.

Highlight 1. High-quality, cutting-edge courses with a level-based learning mode

The curriculum covers introductory, foundational, and advanced topics on large models, so learners can master the full development and usage workflow even with little or no prior background.
With the new level-based learning mode, every learner receives a personalized learning map that records each step of their progress.
Each time you finish a lesson, a new checkpoint lights up on your map: you move one step closer to your goal and earn extra compute rewards. Teaching assistants and instructors accompany you throughout, so you are never alone on the journey.

Highlight 2. Free and ample A100 compute

This season will be taught on the easy-to-use InternStudio compute platform. We will provide ample A100 compute so learners can freely explore the full pipeline of deploying, running inference with, fine-tuning, and evaluating large models, and build impactful projects without worrying about resources.

Highlight 3. Generous compute and technical support to help you build standout projects

We strongly encourage learners to build unique large-model projects. In the graduation group, full of experienced practitioners, you can easily meet like-minded peers and explore together.
On top of that, we provide generous compute and technical support, and we invite star project teams from previous seasons to share their experience, helping learners build standout projects and excel in higher-level competitions.
In the Puyuan large-model challenge (spring season), 8 of the 20 winning projects came from the hands-on camp. Among them, "An offline embodied-intelligence guide dog based on the InternLM2 large model" became the season's championship project and gained broad industry recognition.
The recently popular "Top-sales livestream host large model" was also incubated in the camp and won the creative-track championship in the Puyuan challenge (summer season), further demonstrating the strength and influence of camp projects.

Highlight 4. Hands-on teaching with InternLM2.5

InternLM2.5 was released on July 3, 2024. As the latest generation of the InternLM (书生·浦语) series of large language models, it offers a one-million-token context window and reasoning ability that leads among open-source models, and it supports autonomous planning and online information gathering, making it a capable AI assistant for solving complex problems efficiently. This season will be taught with the latest InternLM2.5, walking you through fine-tuning, deploying, and evaluating InternLM2.5 step by step.

Highlight 5. An official graduation certificate to strengthen your profile

Learners who meet the graduation requirements will receive a graduation certificate, and outstanding learners will additionally receive an "Outstanding Learner" title and a premium certificate. Official certification can further strengthen your profile at work or in school.

What are you waiting for? Come join us and go from beginner to advanced, so you won't get lost in the era of large models!

Sign up: https://www.wjx.cn/vm/PvefmG2.aspx?sojumpparm=github

1. Levels

2. Organizers

3. The InternLM Co-learning Program (书生共学计划)

In the wave of large-model technology, finding effective and trustworthy learning resources amid the flood of information has become a challenge.
That is why we are launching the Co-learning Program: share the hands-on camp with friends who need it, so that everyone who loves technology can find their bearings in a complex information environment and avoid detours on their large-model learning journey.

How to participate

  • Get started: sign up for the camp by filling in the questionnaire and begin your learning journey.
  • Exclusive poster: visit the Co-learning Program page (https://colearn.intern-ai.org.cn/ ), enter your phone number, and generate your own unique sharing poster.
  • Recruit companions: share the poster with friends and invite them to sign up for the camp, spreading the power of knowledge.

Exclusive rewards

  • Each person you invite who fills in the sign-up questionnaire earns you 50 compute points.
  • Invite 3 people to unlock a 24 GB A100 and 80 GB of storage on the InternStudio platform.
  • Invite 6 people to unlock a 40 GB A100 and 120 GB of storage on the InternStudio platform.
  • Invite 16 people to unlock an 80 GB A100 and 200 GB of storage on the InternStudio platform.

Show your influence and become an ambassador of knowledge.
This is not only an opportunity for personal learning and growth, but also a chance to help others while earning recognition and resources for yourself. Through your sharing, we can help more people discover and understand cutting-edge technology. We look forward to having you join us.

Some questions about local deployment

Dear HuiXiangDou developers, while studying the project I still have a few questions I hope you can answer:

  1. What are the charges when web search exceeds the free quota? Can the web search feature be turned off?
  2. Which LLMs and serving frameworks (fastchat? vllm? lmdeploy?) does local deployment support? Are self-fine-tuned models supported?
  3. For a single-GPU local deployment of HuiXiangDou with a fine-tuned internlm2-chat-20b as the LLM, what GPU configuration is recommended?

How to retrieve from Excel files

I tried converting Excel files to JSON and putting them into the vector database, but retrieval quality was poor; putting the content directly into the prompt makes it too long. How can I implement Q&A over Excel files?

Questions about multimodal RAG

detail

I've recently been learning some RAG basics and would like to ask: is there a recommended pipeline for multimodal RAG?

My current thinking is that langchain and faiss should be enough for the retrieval framework, but I'm not sure which models to use for feature extraction. For example, CLIP would probably only give a very coarse feature; if fine-grained features are needed, would it make sense to store the visual tokens of a multimodal LLM, or to use a multimodal model such as ImageBind?

Are there any approaches you would recommend? Thanks.

Running tests.test_query_gradio in remote mode raises the error below

Traceback (most recent call last):
  File "/root/.conda/envs/InternLM2_Huixiangdou/lib/python3.10/site-packages/aiohttp/web_protocol.py", line 452, in _handle_request
    resp = await request_handler(request)
  File "/root/.conda/envs/InternLM2_Huixiangdou/lib/python3.10/site-packages/aiohttp/web_app.py", line 543, in _handle
    resp = await handler(request)
  File "/root/huixiangdou/huixiangdou/service/llm_server_hybrid.py", line 574, in inference
    text = server.generate_response(prompt=prompt,
  File "/root/huixiangdou/huixiangdou/service/llm_server_hybrid.py", line 487, in generate_response
    output_text = self.inference.chat(prompt, history)
AttributeError: 'HybridLLMServer' object has no attribute 'inference'

A discussion about the condition that triggers web search

detail

While reading the source code I noticed that if local knowledge-base retrieval finds no matching documents, the pipeline returns UNRELATED directly and never performs a web search; conversely, if matching documents are found, a web search is performed. Why is it designed this way? In my opinion the trigger should be the opposite: search the web only when no documents are retrieved, and skip the search when documents are found.
