
soulteary / docker-prompt-generator


Using a model to generate prompts for model applications; a shortcut tool that generates image prompts ("spells") for MidJourney, Stable Diffusion, and others.

Home Page: https://soulteary.com/2023/04/05/eighty-lines-of-code-to-implement-the-open-source-midjourney-and-stable-diffusion-spell-drawing-tool.html

License: MIT License

Python 100.00%
midjourney midjourney-app prompt prompt-tool stable-diffusion stable-diffusion-webui

docker-prompt-generator's Introduction

Docker Prompt Generator

Chinese Documentation

Using a model to generate prompts for model applications (MidJourney, Stable Diffusion, etc.)

Preview

Like the official MidJourney feature, it supports extracting prompts from images, as well as expanding on existing prompts.

It also supports writing prompts directly in Chinese and converting them into prompt text that produces better generation results.

Usage

In several previous articles, I mentioned my personal habits and recommended development environment: Docker plus NVIDIA's official base containers for deep learning. I won't repeat the details here; if you're interested, see the earlier article on getting started with Docker-based deep learning environments. Long-time readers should already be quite familiar with it.

Of course, since parts of this article can be run with just a CPU, you can also refer to Playing with the Stable Diffusion Model on MacBook Devices with M1 and M2 chips, published a few months ago, to configure your environment.

Once your Docker environment is ready, we can continue to have some fun.

Find a suitable directory and use git clone or download the Zip archive to get the "Docker Prompt Generator" project code onto your local machine.

git clone https://github.com/soulteary/docker-prompt-generator.git
# or
curl -sL -o docker-prompt-generator.zip https://github.com/soulteary/docker-prompt-generator/archive/refs/heads/main.zip

Next, enter the project directory and use NVIDIA's official PyTorch Docker base image to build the environment. Compared with pulling a pre-made image directly from Docker Hub, building it yourself will save a lot of time.

Execute the following commands in the project directory to complete the model application build:

# Build the base image
docker build -t soulteary/prompt-generator:base . -f docker/Dockerfile.base

# Build the CPU application
docker build -t soulteary/prompt-generator:cpu . -f docker/Dockerfile.cpu

# Build the GPU application
docker build -t soulteary/prompt-generator:gpu . -f docker/Dockerfile.gpu

Then, depending on your hardware, run one of the following commands to start the model application with its web UI:

# Run the CPU image (GPU-related flags are not needed here)
docker run --rm -it -p 7860:7860 soulteary/prompt-generator:cpu

# Run the GPU image
docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --rm -it -p 7860:7860 soulteary/prompt-generator:gpu

Open http://<host-ip>:7860 in your browser (where <host-ip> is the address of the host running the container), and you can start using the tool.

Credits

Models:

Datasets:

docker-prompt-generator's People

Contributors

soulteary


docker-prompt-generator's Issues

Ubuntu, GPU mode: error on container startup

Ubuntu system, GPU mode
Loaded CLIP model and data in 3531.72 seconds.
Traceback (most recent call last):
File "webui.py", line 84, in <module>
block.queue(max_size=64).launch(show_api=False, enable_queue=True, debug=True, share=False, server_name='0.0.0.0')
TypeError: launch() got an unexpected keyword argument 'enable_queue'
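A hedged sketch of one way to work around this: newer Gradio releases removed the `enable_queue` keyword from `launch()` (queueing is configured via `.queue()` instead), so `webui.py` could filter its keyword arguments against the installed signature. The helper name `launch_compat` is hypothetical, not part of this project or Gradio.

```python
import inspect

def launch_compat(launch_fn, **kwargs):
    """Call launch_fn with only the keyword arguments it still accepts.

    Newer Gradio versions removed `enable_queue` from launch(); filtering
    against the live signature keeps one webui.py working across versions.
    `launch_compat` is a hypothetical helper, not part of the project.
    """
    params = inspect.signature(launch_fn).parameters
    accepts_var_kw = any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )
    if not accepts_var_kw:
        # Drop any kwarg the installed launch() no longer declares.
        kwargs = {k: v for k, v in kwargs.items() if k in params}
    return launch_fn(**kwargs)
```

With this, `launch_compat(block.queue(max_size=64).launch, show_api=False, enable_queue=True, debug=True, share=False, server_name='0.0.0.0')` would silently drop `enable_queue` on Gradio versions that no longer accept it.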

Error when generating from text

Error message:
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Input length of input_ids is 88, but max_length is set to 79. This can lead to unexpected behavior. You should consider increasing max_new_tokens.
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Input length of input_ids is 88, but max_length is set to 74. This can lead to unexpected behavior. You should consider increasing max_new_tokens.
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Input length of input_ids is 88, but max_length is set to 88. This can lead to unexpected behavior. You should consider increasing max_new_tokens.
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Input length of input_ids is 88, but max_length is set to 78. This can lead to unexpected behavior. You should consider increasing max_new_tokens.
Setting pad_token_id to eos_token_id:50256 for open-end generation.
Input length of input_ids is 88, but max_length is set to 81. This can lead to unexpected behavior. You should consider increasing max_new_tokens.

What parameters do I need to set for this?
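The warning means the prompt is already longer (88 tokens) than the absolute `max_length` budget, which transformers counts from the first input token, so no room is left to generate. A minimal sketch of converting such a budget to `max_new_tokens`; the helper name and the 16-token minimum margin are illustrative assumptions, not part of this project:

```python
def to_new_token_budget(gen_kwargs, prompt_len, margin=16):
    """Replace an absolute max_length budget with a relative max_new_tokens.

    transformers' max_length includes the prompt, so an 88-token prompt with
    max_length=79 leaves no generation budget at all. max_new_tokens counts
    only generated tokens. `margin` is an arbitrary illustrative minimum.
    """
    kwargs = dict(gen_kwargs)
    max_length = kwargs.pop("max_length", None)
    if max_length is not None:
        kwargs["max_new_tokens"] = max(max_length - prompt_len, margin)
    return kwargs
```

For example, `pipe(text, **to_new_token_budget({"max_length": 79}, prompt_len=88))` would request at least 16 new tokens instead of a zero budget.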

ERROR: failed to solve: authorization failed

ERROR: failed to solve: soulteary/prompt-generator:base: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed

PS E:\AIProject\docker-prompt-generator> docker build -t soulteary/prompt-generator:gpu . -f docker/Dockerfile.gpu
[+] Building 3.3s (4/4) FINISHED                                                                         docker:default
 => [internal] load .dockerignore                                                                                  0.0s
 => => transferring context: 2B                                                                                    0.0s
 => [internal] load build definition from Dockerfile.gpu                                                           0.0s
 => => transferring dockerfile: 879B                                                                               0.0s
 => ERROR [internal] load metadata for docker.io/soulteary/prompt-generator:base                                   3.3s
 => [auth] soulteary/prompt-generator:pull token for registry-1.docker.io                                          0.0s
------
 > [internal] load metadata for docker.io/soulteary/prompt-generator:base:
------
Dockerfile.gpu:1
--------------------
   1 | >>> FROM soulteary/prompt-generator:base
   2 |     LABEL org.opencontainers.image.authors="[email protected]"
   3 |
--------------------
ERROR: failed to solve: soulteary/prompt-generator:base: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed

Build failed.
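This error usually means the local `base` image was never built, so Docker falls back to trying to pull it from Docker Hub, where no such public repository exists. A hedged sketch of the expected build order:

```shell
# The cpu/gpu Dockerfiles start FROM soulteary/prompt-generator:base,
# which exists only locally. Build it first:
docker build -t soulteary/prompt-generator:base . -f docker/Dockerfile.base

# Only then build the application image:
docker build -t soulteary/prompt-generator:gpu . -f docker/Dockerfile.gpu
```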

Error when running the GPU image

D:\Desk\docker-prompt-generator-main> docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --rm -it -p 7860:7860 soulteary/prompt-generator:gpu
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown.
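A missing `libnvidia-ml.so.1` in the runtime hook usually points at the NVIDIA driver or the NVIDIA Container Toolkit not being set up on the host, rather than at this image. A hedged checklist, assuming an Ubuntu host (the package name is the Ubuntu one):

```shell
# 1. Confirm the NVIDIA driver works on the host at all:
nvidia-smi

# 2. Install the NVIDIA Container Toolkit and restart the Docker daemon
#    so the container runtime hook can find libnvidia-ml:
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```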

I keep hitting this problem; updating Gradio or installing 2.4.5 doesn't help either

Traceback (most recent call last):
File "webui.py", line 84, in <module>
block.queue(max_size=64).launch(show_api=False, enable_queue=True, debug=True, share=False, server_name='0.0.0.0')
TypeError: launch() got an unexpected keyword argument 'enable_queue'
I0413 11:44:48.399149 140205898127104 _client.py:1026] HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
I0413 11:44:49.024987 140205831014144 _client.py:1026] HTTP Request: GET https://checkip.amazonaws.com/ "HTTP/1.1 200 "
I0413 11:44:50.318053 140205831014144 _client.py:1026] HTTP Request: POST https://api.gradio.app/gradio-initiated-analytics/ "HTTP/1.1 200 OK"

docker build always fails; I can fix it manually, but what are the contents of your get-models.py file?

docker build -t soulteary/prompt-generator:gpu . -f docker/Dockerfile.gpu
[+] Building 0.1s (2/2) FINISHED
=> [internal] load build definition from Dockerfile.gpu 0.0s
=> => transferring dockerfile: 41B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
failed to solve with frontend dockerfile.v0: failed to create LLB definition: dockerfile parse error line 8: FROM requires either one or three arguments


docker build image failed

Error when building the base image; what is the minimum memory required?

ImportError: /usr/local/lib/python3.8/dist-packages/sklearn/__check_build/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0: cannot allocate memory in static TLS block

Full stack trace:

=> [internal] load build definition from Dockerfile.base 0.0s
=> => transferring dockerfile: 696B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for nvcr.io/nvidia/pytorch:22.12-py3 1.9s
=> [1/5] FROM nvcr.io/nvidia/pytorch:22.12-py3@sha256:09a80f272dd173c9d8 0.0s
=> CACHED [2/5] RUN pip config set global.index-url https://pypi.tuna.ts 0.0s
=> CACHED [3/5] WORKDIR /app 0.0s
=> CACHED [4/5] RUN echo -e 'from transformers import AutoTokenizer, Aut 0.0s
=> ERROR [5/5] RUN python /get-models.py && rm -rf /get-models.py 2.2s


[5/5] RUN python /get-models.py && rm -rf /get-models.py:
#8 1.924 Traceback (most recent call last):
#8 1.924 File "/usr/local/lib/python3.8/dist-packages/sklearn/__check_build/__init__.py", line 44, in <module>
#8 1.924 from ._check_build import check_build  # noqa
#8 1.924 ImportError: /usr/local/lib/python3.8/dist-packages/sklearn/__check_build/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0: cannot allocate memory in static TLS block
#8 1.924
#8 1.924 During handling of the above exception, another exception occurred:
#8 1.924
#8 1.924 Traceback (most recent call last):
#8 1.924 File "/usr/local/lib/python3.8/dist-packages/transformers/utils/import_utils.py", line 1126, in _get_module
#8 1.924 return importlib.import_module("." + module_name, self.__name__)
#8 1.924 File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
#8 1.924 return _bootstrap._gcd_import(name[level:], package, level)
#8 1.924 File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
#8 1.924 File "<frozen importlib._bootstrap>", line 991, in _find_and_load
#8 1.924 File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
#8 1.924 File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
#8 1.924 File "<frozen importlib._bootstrap_external>", line 848, in exec_module
#8 1.924 File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
#8 1.924 File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/__init__.py", line 61, in <module>
#8 1.924 from .document_question_answering import DocumentQuestionAnsweringPipeline
#8 1.924 File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/document_question_answering.py", line 29, in <module>
#8 1.924 from .question_answering import select_starts_ends
#8 1.924 File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/question_answering.py", line 8, in <module>
#8 1.924 from ..data import SquadExample, SquadFeatures, squad_convert_examples_to_features
#8 1.924 File "/usr/local/lib/python3.8/dist-packages/transformers/data/__init__.py", line 26, in <module>
#8 1.924 from .metrics import glue_compute_metrics, xnli_compute_metrics
#8 1.924 File "/usr/local/lib/python3.8/dist-packages/transformers/data/metrics/__init__.py", line 18, in <module>
#8 1.924 if is_sklearn_available():
#8 1.924 File "/usr/local/lib/python3.8/dist-packages/transformers/utils/import_utils.py", line 565, in is_sklearn_available
#8 1.924 return is_scipy_available() and importlib.util.find_spec("sklearn.metrics")
#8 1.924 File "/usr/lib/python3.8/importlib/util.py", line 94, in find_spec
#8 1.924 parent = __import__(parent_name, fromlist=['__path__'])
#8 1.924 File "/usr/local/lib/python3.8/dist-packages/sklearn/__init__.py", line 81, in <module>
#8 1.924 from . import __check_build  # noqa: F401
#8 1.924 File "/usr/local/lib/python3.8/dist-packages/sklearn/__check_build/__init__.py", line 46, in <module>
#8 1.924 raise_build_error(e)
#8 1.924 File "/usr/local/lib/python3.8/dist-packages/sklearn/__check_build/__init__.py", line 31, in raise_build_error
#8 1.924 raise ImportError("""%s
#8 1.924 ImportError: /usr/local/lib/python3.8/dist-packages/sklearn/__check_build/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0: cannot allocate memory in static TLS block
#8 1.924 ___________________________________________________________________________
#8 1.924 Contents of /usr/local/lib/python3.8/dist-packages/sklearn/__check_build:
#8 1.924 _check_build.cpython-38-aarch64-linux-gnu.so  __init__.py  __pycache__
#8 1.924 setup.py
#8 1.924 ___________________________________________________________________________
#8 1.924 It seems that scikit-learn has not been built correctly.
#8 1.924
#8 1.924 If you have installed scikit-learn from source, please do not forget
#8 1.924 to build the package before using it: run python setup.py install or
#8 1.924 make in the source directory.
#8 1.924
#8 1.924 If you have used an installer, please check that it is suited for your
#8 1.924 Python version, your operating system and your platform.
#8 1.924
#8 1.924 The above exception was the direct cause of the following exception:
#8 1.924
#8 1.924 Traceback (most recent call last):
#8 1.924 File "/get-models.py", line 1, in <module>
#8 1.924 from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
#8 1.924 File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
#8 1.924 File "/usr/local/lib/python3.8/dist-packages/transformers/utils/import_utils.py", line 1116, in __getattr__
#8 1.924 module = self._get_module(self._class_to_module[name])
#8 1.924 File "/usr/local/lib/python3.8/dist-packages/transformers/utils/import_utils.py", line 1128, in _get_module
#8 1.924 raise RuntimeError(
#8 1.924 RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
#8 1.924 /usr/local/lib/python3.8/dist-packages/sklearn/__check_build/../../scikit_learn.libs/libgomp-d22c30c5.so.1.0.0: cannot allocate memory in static TLS block
#8 1.924 ___________________________________________________________________________
#8 1.924 Contents of /usr/local/lib/python3.8/dist-packages/sklearn/__check_build:
#8 1.924 _check_build.cpython-38-aarch64-linux-gnu.so  __init__.py  __pycache__
#8 1.924 setup.py
#8 1.924 ___________________________________________________________________________
#8 1.924 It seems that scikit-learn has not been built correctly.
#8 1.924
#8 1.924 If you have installed scikit-learn from source, please do not forget
#8 1.924 to build the package before using it: run python setup.py install or
#8 1.924 make in the source directory.
#8 1.924
#8 1.924 If you have used an installer, please check that it is suited for your
#8 1.924 Python version, your operating system and your platform.


executor failed running [/bin/sh -c python /get-models.py && rm -rf /get-models.py]: exit code: 1
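Despite the question about memory size, "cannot allocate memory in static TLS block" is a known glibc/libgomp quirk on aarch64 rather than a RAM-capacity problem. A commonly reported workaround is to preload libgomp before Python starts; the library path below is the one from the error message above and may differ in other images:

```shell
# Preload libgomp so its static TLS is allocated early, before other
# libraries exhaust the static TLS surplus (aarch64-specific quirk).
export LD_PRELOAD=/usr/local/lib/python3.8/dist-packages/scikit_learn.libs/libgomp-d22c30c5.so.1.0.0
python /get-models.py
```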

Error when building the image

docker build -t soulteary/prompt-generator:base . -f docker/Dockerfile.base

Error response from daemon: Dockerfile parse error line 11: FROM requires either one or three arguments

Deployment error

[+] Building 0.1s (2/2) FINISHED
=> [internal] load build definition from Dockerfile.base 0.0s
=> => transferring dockerfile: 696B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
failed to solve with frontend dockerfile.v0: failed to create LLB definition: dockerfile parse error line 11: FROM requires either one or three arguments

I looked at other people's issues, which say it's a Docker version problem. I tried downloading the latest version from the official site, but it still won't deploy. Which version builds correctly?

Cloud integration: v1.0.31
Version: 20.10.24
API version: 1.41
Go version: go1.19.7
Git commit: 297e128
Built: Tue Apr 4 18:21:21 2023
OS/Arch: darwin/arm64
Context: default
Experimental: true

I keep getting this error

pull access denied for soulteary/prompt-generator, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

building base image error

hey, I'm trying to build the base image, but got this error:
failed to solve with frontend dockerfile.v0: failed to create LLB definition: dockerfile parse error line 11: FROM requires either one or three arguments
could you help with it pls?
Cheers
Charlotte

PLAN: make building and deploying easier

This repository has accumulated a number of issues caused by building on Windows or with older Docker versions.
Although they can all be resolved, people keep running into them and asking about them, which is a poor experience both for developers and for me.


  • More general platform-wide builds
    • Linux
    • Windows
    • macOS x86
    • macOS ARM
  • pre-built Docker images

401 Unauthorized

[1/5] FROM nvcr.io/nvidia/pytorch:22.12-py3@sha256:09a80f272dd173c9d8f28c23a1985aebe2bd3edd41a184ee9634f6e3f8a1f63d:


failed to copy: httpReadSeeker: failed open: failed to authorize: rpc error: code = Unknown desc = failed to fetch anonymous token: unexpected status: 401 Unauthorized

Error when running after deployment

Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/gradio/routes.py", line 393, in run_predict
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1108, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 915, in call_function
prediction = await anyio.to_thread.run_sync(
File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "webui.py", line 56, in get_prompt_from_image
prompt = ci.interrogate(image)
File "/usr/local/lib/python3.8/dist-packages/clip_interrogator/clip_interrogator.py", line 257, in interrogate
merged = _merge_tables([self.artists, self.flavors, self.mediums, self.movements, self.trendings], self.config)
File "/usr/local/lib/python3.8/dist-packages/clip_interrogator/clip_interrogator.py", line 429, in _merge_tables
m = LabelTable([], None, None, None, config)
TypeError: __init__() takes 4 positional arguments but 6 were given
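This `__init__()` signature mismatch suggests the installed `clip_interrogator` is a different release than the one `webui.py` was written against (the `LabelTable` constructor changed between releases). A hedged fix is to pin the package inside the image; `0.5.4` below is only an illustrative version, check which release the project's requirements actually target:

```shell
# Pin clip-interrogator to a release whose LabelTable signature matches
# webui.py; 0.5.4 is illustrative, verify against the project's pins.
pip install "clip-interrogator==0.5.4"
```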
