
docker-faster-whisper's Issues

[BUG] faster-whisper:gpu-version-1.0.1 runs out of memory after ~ 1h

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Hello all,

I'm running faster-whisper this way:
docker run -d --gpus all --runtime=nvidia --name=faster-whisper --privileged=true -e WHISPER_BEAM=10 -e WHISPER_LANG=fr -e WHISPER_MODEL=medium-int8 -e NVIDIA_DRIVER_CAPABILITIES=all -e NVIDIA_VISIBLE_DEVICES=all -p 10300:10300/tcp -v /mnt/docker/data/faster-whisper/:/config:rw ghcr.io/linuxserver/lspipepr-faster-whisper:gpu-version-1.0.1

Hardware:

OS: GNU/Linux Debian 12.5 (kernel: 6.6.13+bpo-amd64)
GPU: NVIDIA TU106M [GeForce RTX 2060 Mobile] (6 GB memory)
CPU: AMD Ryzen 7 4800H with Radeon Graphics (8 cores / 1.4 GHz)
Host memory: 32 GB
Storage: 1 TB NVMe SSD

It works fine, and the container's PID is attached to the GPU (visible in nvidia-smi).

But after ~1h of inactivity I get the following out-of-memory error: https://pastebin.com/raw/c3s4wYAm
If I check nvidia-smi, I still see the container PID using ~1.3 GB of memory (well under the 6 GB available on the GPU).

Could someone kindly point me to a fix?
While searching, I spotted the max_split_size_mb setting, but I don't know whether it could help, and I really don't know how to apply it.
Am I using too big a model for my GPU? Or should I reduce the beam size?
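
As far as I can tell, max_split_size_mb is an option of PyTorch's CUDA caching allocator (set via the PYTORCH_CUDA_ALLOC_CONF environment variable); faster-whisper runs on CTranslate2 rather than PyTorch, so it most likely would not apply here. To judge whether the model and beam width alone fit in 6 GB, one could load the same configuration directly with the faster-whisper Python API and watch nvidia-smi; a minimal sketch, assuming the package is importable inside the container and using a placeholder "test.wav":

```python
# Footprint sanity check (sketch): load the same model/settings the
# container uses and watch nvidia-smi during transcription.
# Assumes faster-whisper is importable; "test.wav" is a placeholder.
from faster_whisper import WhisperModel

model = WhisperModel("medium", device="cuda", compute_type="int8")
segments, info = model.transcribe("test.wav", language="fr", beam_size=10)
for segment in segments:
    print(segment.text)
```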

Thank you very much for your help
Best regards

Expected Behavior

faster-whisper stays available long-term, without OOM errors

NB: when the problem occurs, nvidia-smi shows the container PID using ~1.3 GB of GPU memory (so ~4.7 GB is still available)

Steps To Reproduce

  1. In this environment, with this config:
    OS: GNU/Linux Debian 12.5 (kernel: 6.6.13+bpo-amd64)
    GPU: NVIDIA TU106M [GeForce RTX 2060 Mobile] (6 GB memory)
    CPU: AMD Ryzen 7 4800H with Radeon Graphics (8 cores / 1.4 GHz)
    Host memory: 32 GB
    Storage: 1 TB NVMe SSD
    Docker version: 20.10.24+dfsg1
  2. Run: docker run -d --gpus all --runtime=nvidia --name=faster-whisper --privileged=true -e WHISPER_BEAM=10 -e WHISPER_LANG=fr -e WHISPER_MODEL=medium-int8 -e NVIDIA_DRIVER_CAPABILITIES=all -e NVIDIA_VISIBLE_DEVICES=all -p 10300:10300/tcp -v /mnt/docker/data/faster-whisper/:/config:rw ghcr.io/linuxserver/lspipepr-faster-whisper:gpu-version-1.0.1
  3. Wait ~1h and notice "exception=RuntimeError('CUDA failed with error out of memory')>" in the container's logs

Environment

- OS: GNU/Linux Debian 12.5 (kernel: 6.6.13+bpo-amd64)
- How docker service was installed: apt update + apt install docker.io + nvidia-ctk runtime configure --runtime=docker

CPU architecture

x86-64

Docker creation

docker run -d --gpus all --runtime=nvidia --name=faster-whisper --privileged=true -e WHISPER_BEAM=10 -e WHISPER_LANG=fr -e WHISPER_MODEL=medium-int8 -e NVIDIA_DRIVER_CAPABILITIES=all -e NVIDIA_VISIBLE_DEVICES=all -p 10300:10300/tcp -v /mnt/docker/data/faster-whisper/:/config:rw ghcr.io/linuxserver/lspipepr-faster-whisper:gpu-version-1.0.1

Container logs

[custom-init] No custom files found, skipping...
INFO:__main__:Ready
[ls.io-init] done.
INFO:wyoming_faster_whisper.handler: Allume la cuisine.
INFO:wyoming_faster_whisper.handler: Éteins la cuisine !
ERROR:asyncio:Task exception was never retrieved
future: <Task finished name='Task-14' coro=<AsyncEventHandler.run() done, defined at /lsiopy/lib/python3.10/site-packages/wyoming/server.py:28> exception=RuntimeError('CUDA failed with error out of memory')>
Traceback (most recent call last):
  File "/lsiopy/lib/python3.10/site-packages/wyoming/server.py", line 35, in run
    if not (await self.handle_event(event)):
  File "/lsiopy/lib/python3.10/site-packages/wyoming_faster_whisper/handler.py", line 75, in handle_event
    text = " ".join(segment.text for segment in segments)
  File "/lsiopy/lib/python3.10/site-packages/wyoming_faster_whisper/handler.py", line 75, in <genexpr>
    text = " ".join(segment.text for segment in segments)
  File "/lsiopy/lib/python3.10/site-packages/wyoming_faster_whisper/faster_whisper/transcribe.py", line 162, in generate_segments
    for start, end, tokens in tokenized_segments:
  File "/lsiopy/lib/python3.10/site-packages/wyoming_faster_whisper/faster_whisper/transcribe.py", line 186, in generate_tokenized_segments
    result, temperature = self.generate_with_fallback(segment, prompt, options)
  File "/lsiopy/lib/python3.10/site-packages/wyoming_faster_whisper/faster_whisper/transcribe.py", line 279, in generate_with_fallback
    result = self.model.generate(
RuntimeError: CUDA failed with error out of memory

[BUG] medium.en-int8 invalid choice for model

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

medium.en-int8 is rejected as an invalid model choice; the English-only (.en) variants seem to need to be added as additional options.

Expected Behavior

No response

Steps To Reproduce

Select medium.en-int8 as the model.

Environment

- OS: Unraid

CPU architecture

x86-64

Docker creation

via Unraid

Container logs

__main__.py: error: argument --model: invalid choice: 'medium.en-int8' (choose from 'tiny', 'tiny-int8', 'base', 'base-int8', 'small', 'small-int8', 'medium', 'medium-int8')
usage: __main__.py [-h] --model
                   {tiny,tiny-int8,base,base-int8,small,small-int8,medium,medium-int8}
                   --uri URI --data-dir DATA_DIR [--download-dir DOWNLOAD_DIR]
                   [--device DEVICE] [--language LANGUAGE]
                   [--compute-type COMPUTE_TYPE] [--beam-size BEAM_SIZE]
                   [--debug]
__main__.py: error: argument --model: invalid choice: 'medium.en-int8' (choose from 'tiny', 'tiny-int8', 'base', 'base-int8', 'small', 'small-int8', 'medium', 'medium-int8')
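
The rejection is straightforward to reproduce with a plain argparse parser whose choices list matches the error output above; the English-only (.en) variants are simply not among the accepted values. A sketch (the choices are copied from the error message, not from the wyoming_faster_whisper source):

```python
# Reproduces the rejection above; choices copied from the error output.
import argparse

parser = argparse.ArgumentParser(prog="__main__.py")
parser.add_argument(
    "--model",
    required=True,
    choices=[
        "tiny", "tiny-int8", "base", "base-int8",
        "small", "small-int8", "medium", "medium-int8",
    ],
)
# Exits with: error: argument --model: invalid choice: 'medium.en-int8'
parser.parse_args(["--model", "medium.en-int8"])
```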

[BUG] cannot use alternative faster-whisper models

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

When I add my own faster-whisper CT2 model and set the corresponding ENV name, I get an error message saying that only the tiny, base, small, and medium int8 models can be used.

After renaming my model and the ENV to medium-int8 and restarting the container, I get this error:
WARNING:wyoming_faster_whisper.download:Model hashes do not match

Moreover, the container starts a download and erases my model without permission:
INFO:__main__:Downloading FasterWhisperModel.MEDIUM_INT8 to /config
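
The warnings suggest a per-file MD5 check against a hard-coded table, with any mismatch triggering a fresh download into /config. A sketch of the implied logic (assumed, not the actual wyoming_faster_whisper.download source; the expected hashes are copied from the container log below):

```python
# Sketch of the implied hash check: compare each model file's MD5
# against a hard-coded table; on mismatch, re-download into /config
# (overwriting whatever is there).
import hashlib
from pathlib import Path

EXPECTED = {  # copied from the container log below
    "config.json": "e5a2f85afc17f73960204cad2b002633",
    "model.bin": "99b6aca05c475cbdcc182db2b2aed363",
    "vocabulary.txt": "c1120a13c94a8cbb132489655cdd1854",
}

def md5sum(path: Path) -> str:
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

model_dir = Path("/config")
mismatch = any(
    not (model_dir / name).exists() or md5sum(model_dir / name) != expected
    for name, expected in EXPECTED.items()
)
if mismatch:
    print("Model hashes do not match -> custom model gets overwritten")
```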

Expected Behavior

  1. I expect to be able to use any compatible CT2 model of my choice.
  2. Erasing a user's data without notice is bad practice, in my opinion.

Steps To Reproduce

Copy your custom model to the config dir.
Start the container.

Environment

- OS: Docker on a Debian 11 VM on Proxmox 7.4
- How docker service was installed:

CPU architecture

x86-64

Docker creation

docker run -d \
 --name=whisper \
 -p 10300:10300 \
 -v /data/docker/whisper:/config \
 -e PUID=1000 \
 -e PGID=1000 \
 -e TZ=Europe/Moscow \
 -e WHISPER_MODEL=medium-int8 \
 -e WHISPER_BEAM=1 \
 -e WHISPER_LANG=ru \
 -e NVIDIA_DRIVER_CAPABILITIES=all \
 --gpus all \
 --runtime nvidia \
 --restart unless-stopped \
 lscr.io/linuxserver/faster-whisper:gpu

Container logs

[migrations] started
[migrations] no migrations found
───────────────────────────────────────
      ██╗     ███████╗██╗ ██████╗
      ██║     ██╔════╝██║██╔═══██╗
      ██║     ███████╗██║██║   ██║
      ██║     ╚════██║██║██║   ██║
      ███████╗███████║██║╚██████╔╝
      ╚══════╝╚══════╝╚═╝ ╚═════╝
   Brought to you by linuxserver.io
───────────────────────────────────────
To support LSIO projects visit:
https://www.linuxserver.io/donate/
───────────────────────────────────────
GID/UID
───────────────────────────────────────
User UID:    1000
User GID:    1000
───────────────────────────────────────
[custom-init] No custom files found, skipping...
WARNING:wyoming_faster_whisper.download:Model hashes do not match
WARNING:wyoming_faster_whisper.download:Expected: {'config.json': 'e5a2f85afc17f73960204cad2b002633', 'model.bin': '99b6aca05c475cbdcc182db2b2aed363', 'vocabulary.txt': 'c1120a13c94a8cbb132489655cdd1854'}
WARNING:wyoming_faster_whisper.download:Got: {'model.bin': '1068d7a50dd6889462a99f17c10a409a', 'config.json': '344ea4eb5681c254bce0fbd6f7afe352', 'vocabulary.txt': ''}
INFO:__main__:Downloading FasterWhisperModel.MEDIUM_INT8 to /config

[BUG] [GPU] Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I'm using the lscr.io/linuxserver/faster-whisper:gpu image, and any Wyoming prompt results in the following error:

Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory

It appears to be related to this behavior in faster-whisper:
SYSTRAN/faster-whisper#516
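
A quick way to confirm the failure is to ask the dynamic loader directly whether the library is resolvable inside the container; a diagnostic sketch (run via docker exec; plain ctypes, nothing specific to this image). The workaround discussed in the linked issue amounts to making a cuDNN 8 build visible on the library path:

```python
# Diagnostic sketch: can the dynamic loader resolve the cuDNN library
# CTranslate2 needs at inference time? Run inside the container, e.g.
#   docker exec -it faster-whisper python3 check_cudnn.py
import ctypes

try:
    ctypes.CDLL("libcudnn_ops_infer.so.8")
    print("libcudnn_ops_infer.so.8 resolved")
except OSError as err:
    print(f"not found: {err}")  # matches the error in the logs below
```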

Expected Behavior

faster-whisper is able to use the GPU for speech-to-text

Steps To Reproduce

Set up the faster-whisper Docker container per below.
Add faster-whisper to Home Assistant using the Wyoming protocol.
Set up a Raspberry Pi 3+ with wyoming-satellite per https://github.com/rhasspy/wyoming-satellite/blob/master/docs/tutorial_installer.md
Prompts are responded to (local wyoming-wakeword.service), but the logs on the Docker container indicate an error.

Logs for docker container lscr.io/linuxserver/faster-whisper:gpu

INFO:faster_whisper:Processing audio with duration 00:15.000
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory

Logs for wyoming-satellite.service

run[807]: WARNING:root:Event(type='error', data={'text': 'speech-to-text failed', 'code': 'stt-stream-failed'}, payload=None)

Environment

- OS: Centos Stream 8 using the kernel-ml module
Linux 6.5.6-1.el8.elrepo.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Oct  6 17:10:59 EDT 2023 x86_64 x86_64 x86_64 GNU/Linux
- How docker service was installed:
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum erase podman buildah
yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-compose
wget 'http://international.download.nvidia.com/XFree86/Linux-x86_64/550.67/NVIDIA-Linux-x86_64-550.67.run'
chmod +x NVIDIA-Linux-x86_64-550.67.run
./NVIDIA-Linux-x86_64-550.67.run
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo |   sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
yum install -y nvidia-container-toolkit
nvidia-ctk runtime configure --runtime=docker


nvidia-smi
Sat Apr 13 21:29:22 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.67                 Driver Version: 550.67         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1080        Off |   00000000:05:00.0 Off |                  N/A |
|  0%   31C    P8              8W /  180W |    6487MiB /   8192MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A   1731864      C   /app/.venv/bin/python                        3244MiB |
|    0   N/A  N/A   1737066      C   python3                                      3240MiB |
+-----------------------------------------------------------------------------------------+


CPU architecture

x86-64

Docker creation

```yaml
version: '3.8'
services:
  faster-whisper:
    image: lscr.io/linuxserver/faster-whisper:gpu
    container_name: faster-whisper
    restart: always
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    environment:
      - PUID=<REDACTED>
      - PGID=<REDACTED>
      - TZ=<REDACTED>
      - WHISPER_MODEL=medium
      - WHISPER_BEAM=1 #optional
      - WHISPER_LANG=en #optional
    volumes:
      - /path/to/docker/whisper/config/:/config
    ports:
      - 10300:10300
    runtime: nvidia
    networks:
      swag_default:


Container logs

```
[custom-init] No custom files found, skipping...
[2024-04-13 21:18:16.336] [ctranslate2] [thread 153] [warning] The compute type inferred from the saved model is float16, but the target device or backend do not support efficient float16 computation. The model weights have been automatically converted to use the float32 compute type instead.
INFO:__main__:Ready
[ls.io-init] done.
INFO:faster_whisper:Processing audio with duration 00:15.000
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
[2024-04-13 21:21:04.052] [ctranslate2] [thread 223] [warning] The compute type inferred from the saved model is float16, but the target device or backend do not support efficient float16 computation. The model weights have been automatically converted to use the float32 compute type instead.
INFO:__main__:Ready
```

how to invoke faster-whisper in this docker image

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I am just trying to figure out how to invoke whisper after running this Docker image.

"whisper-ctranslate2" doesn't seem to exist in the docker.

So do I run the Docker image and then, from the command line ("docker exec -it faster-whisper /bin/bash"), pass some Python programs to it (there seem to be mostly Python packages in the Docker image)?

Sorry, this is a pretty dumb question.
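
For context: the image does not ship a CLI such as whisper-ctranslate2; it runs a Wyoming protocol server on port 10300, which clients like Home Assistant talk to over TCP. A minimal client sketch, assuming the wyoming pip package (class names follow the Wyoming ASR spec, not verified against this image) and a 16-bit mono WAV file:

```python
# Minimal Wyoming ASR client sketch (assumes `pip install wyoming`;
# event/class names follow the Wyoming spec, not this image's code).
import asyncio
import wave

from wyoming.asr import Transcribe, Transcript
from wyoming.audio import AudioChunk, AudioStart, AudioStop
from wyoming.client import AsyncTcpClient

async def transcribe(path: str, host: str = "localhost", port: int = 10300) -> str:
    client = AsyncTcpClient(host, port)
    await client.connect()
    await client.write_event(Transcribe(language="en").event())
    with wave.open(path, "rb") as wav:
        rate, width, channels = (
            wav.getframerate(), wav.getsampwidth(), wav.getnchannels()
        )
        await client.write_event(
            AudioStart(rate=rate, width=width, channels=channels).event()
        )
        audio = wav.readframes(wav.getnframes())
        await client.write_event(
            AudioChunk(rate=rate, width=width, channels=channels, audio=audio).event()
        )
    await client.write_event(AudioStop().event())
    while True:  # wait for the transcript event
        event = await client.read_event()
        if event is None:
            raise RuntimeError("connection closed before transcript")
        if Transcript.is_type(event.type):
            await client.disconnect()
            return Transcript.from_event(event).text

print(asyncio.run(transcribe("test.wav")))
```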

Expected Behavior

No response

Steps To Reproduce

It's an Unraid server. I installed the Docker container and specified the small model, then gave it time to download the model (I checked; it's there).

Environment

- OS:
- How docker service was installed:
unraid docker image

CPU architecture

x86-64

Docker creation

docker run
  -d
  --name='faster-whisper'
  --net='bridge'
  -e TZ="Australia/Sydney"
  -e HOST_OS="Unraid"
  -e HOST_HOSTNAME="towerold"
  -e HOST_CONTAINERNAME="faster-whisper"
  -e 'WHISPER_MODEL'='small'
  -e 'WHISPER_BEAM'='1'
  -e 'WHISPER_LANG'='en'
  -e 'PUID'='99'
  -e 'PGID'='100'
  -e 'UMASK'='022'
  -l net.unraid.docker.managed=dockerman
  -l net.unraid.docker.icon='https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/linuxserver-ls-logo.png'
  -p '10300:10300/tcp'
  -v '/mnt/user/appdata/faster-whisper':'/config':'rw' 'lscr.io/linuxserver/faster-whisper'
13c397248869fe86f84a65fb80cb262a1d0ad0805d1b09e300375de5c97caa1b

The command finished successfully!

Container logs


WARNING:wyoming_faster_whisper.download:Model hashes do not match
WARNING:wyoming_faster_whisper.download:Expected: {'config.json': 'e5a2f85afc17f73960204cad2b002633', 'model.bin': '8b2c0a5013899c255e1f16edc237123b', 'vocabulary.txt': 'c1120a13c94a8cbb132489655cdd1854'}
WARNING:wyoming_faster_whisper.download:Got: {'model.bin': '', 'config.json': '', 'vocabulary.txt': ''}
INFO:__main__:Downloading FasterWhisperModel.SMALL to /config
INFO:__main__:Ready
[migrations] started
[migrations] no migrations found
───────────────────────────────────────

      ██╗     ███████╗██╗ ██████╗ 
      ██║     ██╔════╝██║██╔═══██╗
      ██║     ███████╗██║██║   ██║
      ██║     ╚════██║██║██║   ██║
      ███████╗███████║██║╚██████╔╝
      ╚══════╝╚══════╝╚═╝ ╚═════╝ 

   Brought to you by linuxserver.io
───────────────────────────────────────

To support LSIO projects visit:
https://www.linuxserver.io/donate/

───────────────────────────────────────
GID/UID
───────────────────────────────────────

User UID:    99
User GID:    100
───────────────────────────────────────

[custom-init] No custom files found, skipping...
[ls.io-init] done.

faster-whisper Unraid docker not working with HA container

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I run Home Assistant as a Docker container under Unraid.
I also installed the 3 Wyoming dockers under Unraid
(faster-whisper, piper, and openwakeword).
I'm able to integrate the 3 dockers in HA.
I'm also able to select them in the Assist setup.
But the dockers are not able to listen (faster-whisper and openwakeword), and piper is speaking strange words.
The entity in HA shows unknown.
[screenshot: entity state shown as unknown]
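
One way to narrow this down is to check that each Wyoming container actually answers the Describe/Info handshake that Home Assistant performs on integration; a sketch assuming the wyoming pip package (ports below are the usual defaults, adjust to your mappings):

```python
# Sketch: send Describe to each Wyoming service and wait for Info,
# the same handshake Home Assistant uses. Assumes `pip install wyoming`;
# host and ports are placeholders for your setup.
import asyncio

from wyoming.client import AsyncTcpClient
from wyoming.info import Describe, Info

async def check(host: str, port: int) -> None:
    client = AsyncTcpClient(host, port)
    await client.connect()
    await client.write_event(Describe().event())
    event = await client.read_event()
    if event is not None and Info.is_type(event.type):
        print(f"{host}:{port} answered with service info")
    else:
        print(f"{host}:{port} did not answer as expected")
    await client.disconnect()

for port in (10300, 10200, 10400):  # whisper, piper, openwakeword (typical)
    asyncio.run(check("localhost", port))
```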

Expected Behavior

I expect Assist to be able to listen to speech.

Steps To Reproduce

Just install all Wyoming Dockers.
Integrate them in Home Assistant.
Set up Assist.
Try to communicate with HA through an M5Stack Atom Echo.

Environment

- OS: Unraid
- How docker service was installed: as dockers under Unraid

CPU architecture

x86-64

Docker creation

docker run
  -d
  --name='faster-whisper'
  --net='ewhnet'
  -e TZ="Europe/Berlin"
  -e HOST_OS="Unraid"
  -e HOST_HOSTNAME="Tower2"
  -e HOST_CONTAINERNAME="faster-whisper"
  -e 'WHISPER_MODEL'='base-int8'
  -e 'WHISPER_BEAM'='1'
  -e 'WHISPER_LANG'='en'
  -e 'PUID'='99'
  -e 'PGID'='100'
  -e 'UMASK'='022'
  -l net.unraid.docker.managed=dockerman
  -l net.unraid.docker.icon='https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/linuxserver-ls-logo.png'
  -p '10300:10300/tcp'
  -v '/mnt/user/appdata/faster-whisper':'/config':'rw' 'lscr.io/linuxserver/faster-whisper'
be01025fb600d7b7a5ef470a4fbea453027126e46b1244fa033899feb46ef89e

Container logs

INFO:__main__:Ready
[migrations] started
[migrations] no migrations found
───────────────────────────────────────

      ██╗     ███████╗██╗ ██████╗
      ██║     ██╔════╝██║██╔═══██╗
      ██║     ███████╗██║██║   ██║
      ██║     ╚════██║██║██║   ██║
      ███████╗███████║██║╚██████╔╝
      ╚══════╝╚══════╝╚═╝ ╚═════╝

   Brought to you by linuxserver.io
───────────────────────────────────────

To support LSIO projects visit:
https://www.linuxserver.io/donate/

───────────────────────────────────────
GID/UID
───────────────────────────────────────

User UID:    99
User GID:    100
───────────────────────────────────────

[custom-init] No custom files found, skipping...
[ls.io-init] done.
