motionagent's Introduction




Introduction

ModelScope is built upon the notion of “Model-as-a-Service” (MaaS). It seeks to bring together the most advanced machine learning models from the AI community, and to streamline the process of leveraging AI models in real-world applications. The core ModelScope library, open-sourced in this repository, provides the interfaces and implementations that allow developers to perform model inference, training, and evaluation.

In particular, with rich layers of API abstraction, the ModelScope library offers a unified experience for exploring state-of-the-art models across domains such as CV, NLP, Speech, Multi-Modality, and Scientific Computation. Model contributors in different areas can integrate models into the ModelScope ecosystem through the layered APIs, allowing easy and unified access to their models. Once integrated, model inference, fine-tuning, and evaluation can be done with only a few lines of code. At the same time, flexibility is provided so that different components of a model application can be customized wherever necessary.

Apart from harboring implementations of a wide range of models, the ModelScope library also enables the necessary interactions with ModelScope backend services, particularly the Model-Hub and Dataset-Hub. These interactions allow the management of various entities (models and datasets) to be performed seamlessly under the hood, including entity lookup, version control, and cache management, among others.

Models and Online Accessibility

Hundreds of models are publicly available on ModelScope (700+ and counting), covering the latest developments in areas such as NLP, CV, Audio, Multi-Modality, and AI for Science. Many of these models represent the SOTA in their specific fields and made their open-source debut on ModelScope. Users can visit ModelScope (modelscope.cn) and experience first-hand how these models perform, with just a few clicks. An immediate developer experience is also possible through the ModelScope Notebook, which is backed by a ready-to-use CPU/GPU development environment in the cloud, only one click away on ModelScope.



Some representative examples include:

LLM:

Multi-Modal:

CV:

Audio:

AI for Science:

Note: Most models on ModelScope are public and can be downloaded without account registration on the ModelScope website (www.modelscope.cn). Please refer to the instructions for model download, for downloading models either with the API provided by the modelscope library or with git.
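As a rough sketch of the two download routes mentioned in the note (assumptions: a public model id, and the standard ModelScope git endpoint pattern; snapshot_download is the library call used for cached downloads):

```python
# Sketch of the two download routes. The model id below is the public
# word-segmentation model used elsewhere in this README.

def git_clone_url(model_id: str) -> str:
    """Build the git clone URL for a ModelScope model id ('org/name')."""
    return f"https://www.modelscope.cn/{model_id}.git"

# Route 1: via the modelscope library (downloads into the local cache):
#   from modelscope import snapshot_download
#   model_dir = snapshot_download('damo/nlp_structbert_word-segmentation_chinese-base')

# Route 2: via git (weight files require git-lfs):
print("git clone " + git_clone_url("damo/nlp_structbert_word-segmentation_chinese-base"))
```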

QuickTour

We provide a unified interface for inference using pipeline, and for fine-tuning and evaluation using Trainer, across different tasks.

For any given task with any type of input (image, text, audio, video, ...), an inference pipeline can be implemented with only a few lines of code; it automatically loads the underlying model to produce the inference result, as exemplified below:

>>> from modelscope.pipelines import pipeline
>>> word_segmentation = pipeline('word-segmentation', model='damo/nlp_structbert_word-segmentation_chinese-base')
>>> word_segmentation('今天天气不错,适合出去游玩')
{'output': '今天 天气 不错 , 适合 出去 游玩'}

Given an image, portrait matting (aka. background-removal) can be accomplished with the following code snippet:


>>> import cv2
>>> from modelscope.pipelines import pipeline

>>> portrait_matting = pipeline('portrait-matting')
>>> result = portrait_matting('https://modelscope.oss-cn-beijing.aliyuncs.com/test/images/image_matting.png')
>>> cv2.imwrite('result.png', result['output_img'])

The output image with the background removed is saved to result.png.

Fine-tuning and evaluation can also be done with a few more lines of code to set up the training dataset and trainer, with the heavy lifting of training and evaluating a model encapsulated in the trainer.train() and trainer.evaluate() interfaces.

For example, the GPT-3 base model (1.3B) can be fine-tuned with the chinese-poetry dataset, resulting in a model that can be used for Chinese poetry generation.

>>> from modelscope.metainfo import Trainers
>>> from modelscope.msdatasets import MsDataset
>>> from modelscope.trainers import build_trainer

>>> train_dataset = MsDataset.load('chinese-poetry-collection', split='train').remap_columns({'text1': 'src_txt'})
>>> eval_dataset = MsDataset.load('chinese-poetry-collection', split='test').remap_columns({'text1': 'src_txt'})
>>> max_epochs = 10
>>> tmp_dir = './gpt3_poetry'

>>> kwargs = dict(
     model='damo/nlp_gpt3_text-generation_1.3B',
     train_dataset=train_dataset,
     eval_dataset=eval_dataset,
     max_epochs=max_epochs,
     work_dir=tmp_dir)

>>> trainer = build_trainer(name=Trainers.gpt3_trainer, default_args=kwargs)
>>> trainer.train()

Why should I use the ModelScope library?

  1. A unified and concise user interface is abstracted for different tasks and models. Model inference and training can be implemented in as few as 3 and 10 lines of code, respectively, making it convenient for users to explore models across different fields in the ModelScope community. All models integrated into ModelScope are ready to use, which makes it easy to get started with AI in both educational and industrial settings.

  2. ModelScope offers a model-centric development and application experience. It streamlines the support for model training, inference, export, and deployment, and helps users build their own MLOps on top of the ModelScope ecosystem.

  3. For model inference and training, a modular design is in place and a wealth of functional modules is provided, making it convenient for users to customize their own inference, training, and other processes.

  4. For distributed training, especially of large models, rich strategy support is provided, including data parallelism, model parallelism, hybrid parallelism, and so on.

Installation

Docker

The ModelScope library currently supports popular deep learning frameworks for model training and inference, including PyTorch, TensorFlow, and ONNX. All releases are tested and run on Python 3.7+, PyTorch 1.8+, and TensorFlow 1.15 or TensorFlow 2.0+.

To allow out-of-the-box usage of all models on ModelScope, official docker images are provided for all releases. Based on the docker image, developers can skip all environment installation and configuration and use it directly. Currently, the latest versions of the CPU image and GPU image can be obtained from:

CPU docker image

# py37
registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-py37-torch1.11.0-tf1.15.5-1.6.1

# py38
registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-py38-torch2.0.1-tf2.13.0-1.9.5

GPU docker image

# py37
registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.3.0-py37-torch1.11.0-tf1.15.5-1.6.1

# py38
registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.8.0-py38-torch2.0.1-tf2.13.0-1.9.5
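The image references above follow a single naming pattern. The helper below reconstructs a tag from its version components; this is an observation from the tags listed in this README, not an official naming specification:

```python
from typing import Optional

# Observed tag pattern from the images listed above:
#   ubuntu{os}-[cuda{cuda}-]py{py}-torch{torch}-tf{tf}-{modelscope_version}
REGISTRY = "registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope"

def image_tag(py: str, torch: str, tf: str, modelscope: str,
              os_ver: str = "20.04", cuda: Optional[str] = None) -> str:
    """Assemble a ModelScope docker image tag from its version components."""
    cuda_part = "cuda{}-".format(cuda) if cuda else ""  # GPU images carry a cuda segment
    return "ubuntu{}-{}py{}-torch{}-tf{}-{}".format(
        os_ver, cuda_part, py, torch, tf, modelscope)

# Reconstruct the GPU py38 image reference listed above:
print(REGISTRY + ":" + image_tag("38", "2.0.1", "2.13.0", "1.9.5", cuda="11.8.0"))
```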

Setup Local Python Environment

One can also set up a local ModelScope environment using pip and conda. ModelScope supports Python 3.7 and above. We suggest Anaconda for creating the local Python environment:

conda create -n modelscope python=3.8
conda activate modelscope

PyTorch or TensorFlow can be installed separately according to each model's requirements.

  • Install PyTorch (doc)
  • Install TensorFlow (doc)

After installing the necessary machine-learning framework, you can install the modelscope library as follows:

If you only want to play around with the modelscope framework, or try out model/dataset download, you can install the core modelscope components:

pip install modelscope

If you want to use multi-modal models:

pip install modelscope[multi-modal]

If you want to use nlp models:

pip install modelscope[nlp] -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html

If you want to use cv models:

pip install modelscope[cv] -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html

If you want to use audio models:

pip install modelscope[audio] -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html

If you want to use science models:

pip install modelscope[science] -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html
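The six install commands above follow one pattern, which the small helper below summarizes. This is a convenience sketch based purely on this README (the extras names and the find-links URL are copied from the commands above), not a modelscope API:

```python
# Find-links index used by the nlp/cv/audio/science extras above.
FIND_LINKS = "https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html"

def pip_command(extra=None):
    """Return the pip install command for a ModelScope domain extra."""
    if extra is None:  # core components only
        return "pip install modelscope"
    cmd = "pip install modelscope[{}]".format(extra)
    if extra in ("nlp", "cv", "audio", "science"):
        cmd += " -f " + FIND_LINKS  # these extras need the extra package index
    return cmd

print(pip_command("audio"))
```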

Notes:

  1. Currently, some audio-task models only support Python 3.7 and TensorFlow 1.15.4 on Linux. Most other models can be installed and used on Windows and Mac (x86).

  2. Some models in the audio field use the third-party library SoundFile for wav file processing. On Linux, users need to manually install libsndfile, the underlying library of SoundFile (doc link). On Windows and macOS, it is installed automatically without user action. For example, on Ubuntu, you can use the following commands:

    sudo apt-get update
    sudo apt-get install libsndfile1
  3. Some models in computer vision need mmcv-full; you can refer to the mmcv installation guide. A minimal installation is as follows:

    pip uninstall mmcv # if you have installed mmcv, uninstall it
    pip install -U openmim
    mim install mmcv-full

Learn More

We provide additional documentation, including:

License

This project is licensed under the Apache License (Version 2.0).

Citation

@Misc{modelscope,
  title = {ModelScope: bring the notion of Model-as-a-Service to life.},
  author = {The ModelScope Team},
  howpublished = {\url{https://github.com/modelscope/modelscope}},
  year = {2023}
}

motionagent's People

Contributors

hehaha68, wangqiang9, wangxingjun778, yingdachen


motionagent's Issues

Downloading qwen/Qwen-7B-Chat fails: is login required somewhere?

When generating a script, the model cannot be fetched. The error says login is required, but where do I log in?

2023-09-20 19:59:22,024 - modelscope - ERROR - Authentication token does not exist, failed to access model qwen/Qwen-7B-Chat which may not exist or may be private. Please login first.
Traceback (most recent call last):
File "/app/anaconda3/envs/motion_agent/lib/python3.8/site-packages/modelscope/hub/errors.py", line 81, in handle_http_response
response.raise_for_status()
File "/app/anaconda3/envs/motion_agent/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http://www.modelscope.cn/api/v1/models/qwen/Qwen-7B-Chat/revisions?EndTime=1695211162

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/app/anaconda3/envs/motion_agent/lib/python3.8/site-packages/gradio/queueing.py", line 388, in call_prediction
output = await route_utils.call_process_api(
File "/app/anaconda3/envs/motion_agent/lib/python3.8/site-packages/gradio/route_utils.py", line 219, in call_process_api
output = await app.get_blocks().process_api(
File "/app/anaconda3/envs/motion_agent/lib/python3.8/site-packages/gradio/blocks.py", line 1437, in process_api
result = await self.call_function(
File "/app/anaconda3/envs/motion_agent/lib/python3.8/site-packages/gradio/blocks.py", line 1109, in call_function
prediction = await anyio.to_thread.run_sync(
File "/app/anaconda3/envs/motion_agent/lib/python3.8/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/app/anaconda3/envs/motion_agent/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/app/anaconda3/envs/motion_agent/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/app/anaconda3/envs/motion_agent/lib/python3.8/site-packages/gradio/utils.py", line 650, in wrapper
response = f(*args, **kwargs)
File "app.py", line 36, in qwen_script
script = qwen_infer_p(inputs=inputs)
File "/service/motionagent/inference/qwen_infer.py", line 45, in qwen_infer
tokenizer = AutoTokenizer.from_pretrained("qwen/Qwen-7B-Chat", revision = 'v1.0.1',trust_remote_code=True)
File "/app/anaconda3/envs/motion_agent/lib/python3.8/site-packages/modelscope/utils/hf_util.py", line 90, in from_pretrained
model_dir = snapshot_download(
File "/app/anaconda3/envs/motion_agent/lib/python3.8/site-packages/modelscope/hub/snapshot_download.py", line 96, in snapshot_download
revision = _api.get_valid_revision(
File "/app/anaconda3/envs/motion_agent/lib/python3.8/site-packages/modelscope/hub/api.py", line 479, in get_valid_revision
revisions = self.list_model_revisions(
File "/app/anaconda3/envs/motion_agent/lib/python3.8/site-packages/modelscope/hub/api.py", line 433, in list_model_revisions
handle_http_response(r, logger, cookies, model_id)
File "/app/anaconda3/envs/motion_agent/lib/python3.8/site-packages/modelscope/hub/errors.py", line 88, in handle_http_response
raise HTTPError('Response details: %s' % message) from error
requests.exceptions.HTTPError: Response details: {'Code': 10010200001, 'Message': '获取模型版本失败,信息:获取模失败', 'RequestId': 'e10a8569-e157-4607-98be-8409d70e4e1f', 'Success': False}

An error occurs when deploying motionagent on DSW. I really want to use motionagent; I browsed Bilibili and many people ran into similar problems. Could you help explain? Much appreciated!

The problem is as follows:
Step: running pip3 install -r requirements.txt in the motionagent directory

Error output:
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "", line 2, in
File "", line 34, in
File "/tmp/pip-install-5wqbzv4h/audiocraft_a3fa9fee14b4408daf5895e606fd4ca3/setup.py", line 19, in
REQUIRED = [i.strip() for i in open("requirements.txt")]
FileNotFoundError: [Errno 2] No such file or directory: 'requirements.txt'
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

Many thanks to the developers for releasing this project; I hope to become a loyal fan. Thank you!

Running python app.py raises an error: for field override_dirname is not allowed: use default_factory

(base) root@tyn-Venue-11-Pro-7130-vPro:/home/tyn/motionagent# python app.py
2024-06-02 22:13:03,283 - modelscope - INFO - PyTorch version 2.0.1 Found.
2024-06-02 22:13:03,285 - modelscope - INFO - TensorFlow version 2.16.1 Found.
2024-06-02 22:13:03,285 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
2024-06-02 22:13:03,312 - modelscope - INFO - Loading done! Current index file version is 1.8.4, with md5 50f197fbf9140414ea7ff54927153e24 and a total number of 902 components indexed
Traceback (most recent call last):
File "/home/tyn/motionagent/app.py", line 8, in
from inference.music_infer import music_infer
File "/home/tyn/motionagent/inference/music_infer.py", line 5, in
from audiocraft.data.audio import audio_write
File "/root/miniconda3/lib/python3.11/site-packages/audiocraft/init.py", line 24, in
from . import data, modules, models
File "/root/miniconda3/lib/python3.11/site-packages/audiocraft/data/init.py", line 10, in
from . import audio, audio_dataset, info_audio_dataset, music_dataset, sound_dataset
File "/root/miniconda3/lib/python3.11/site-packages/audiocraft/data/audio_dataset.py", line 33, in
import dora
File "/root/miniconda3/lib/python3.11/site-packages/dora/init.py", line 68, in
import hydra
File "/root/miniconda3/lib/python3.11/site-packages/hydra/init.py", line 5, in
from hydra import utils
File "/root/miniconda3/lib/python3.11/site-packages/hydra/utils.py", line 10, in
from hydra._internal.utils import (
File "/root/miniconda3/lib/python3.11/site-packages/hydra/_internal/utils.py", line 21, in
from hydra.core.utils import get_valid_filename, split_config_path
File "/root/miniconda3/lib/python3.11/site-packages/hydra/core/utils.py", line 19, in
from hydra.core.hydra_config import HydraConfig
File "/root/miniconda3/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in
from hydra.conf import HydraConf
File "/root/miniconda3/lib/python3.11/site-packages/hydra/conf/init.py", line 62, in
class JobConf:
File "/root/miniconda3/lib/python3.11/site-packages/hydra/conf/init.py", line 87, in JobConf
@DataClass
^^^^^^^^^
File "/root/miniconda3/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/root/miniconda3/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory

Please help me figure out the cause!!
