docker-faster-whisper's Introduction

linuxserver.io

The LinuxServer.io team brings you another container release featuring:

  • regular and timely application updates
  • easy user mappings (PGID, PUID)
  • custom base image with s6 overlay
  • weekly base OS updates with common layers across the entire LinuxServer.io ecosystem to minimise space usage, down time and bandwidth
  • regular security updates

Find us at:

  • Blog - all the things you can do with our containers including How-To guides, opinions and much more!
  • Discord - realtime support / chat with the community and the team.
  • Discourse - post on our community forum.
  • Fleet - an online web interface which displays all of our maintained images.
  • GitHub - view the source for all of our repositories.
  • Open Collective - please consider helping us by either donating or contributing to our budget

Faster-whisper is a reimplementation of OpenAI's Whisper model using CTranslate2, which is a fast inference engine for Transformer models. This container provides a Wyoming protocol server for faster-whisper.

Supported Architectures

We utilise the docker manifest for multi-platform awareness. More information is available from docker here and our announcement here.

Simply pulling lscr.io/linuxserver/faster-whisper:latest should retrieve the correct image for your arch, but you can also pull specific arch images via tags.

The architectures supported by this image are:

Architecture  Available  Tag
x86-64        ✅         amd64-<version tag>
arm64         ✅         arm64v8-<version tag>
armhf         ❌

Version Tags

This image provides various versions that are available via tags. Please read the descriptions carefully and exercise caution when using unstable or development tags.

Tag     Available  Description
latest  ✅         Stable releases
gpu     ✅         Releases with Nvidia GPU support

Application Setup

For use with Home Assistant Assist, add the Wyoming integration and supply the hostname/IP and port that Whisper is running on.

When using the gpu tag with Nvidia GPUs, make sure you set the container to use the nvidia runtime, that you have the Nvidia Container Toolkit installed on the host, and that you run the container with the correct GPU(s) exposed. See the Nvidia Container Toolkit docs for more details.
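Putting those requirements together, a compose file for the gpu tag might look like the sketch below. It mirrors the CPU example later in this readme; the `runtime: nvidia` line and `NVIDIA_VISIBLE_DEVICES` value are the GPU-specific parts, and the paths and IDs are placeholders you should adjust:

```yaml
---
services:
  faster-whisper:
    image: lscr.io/linuxserver/faster-whisper:gpu
    container_name: faster-whisper
    runtime: nvidia            # requires the Nvidia Container Toolkit on the host
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - WHISPER_MODEL=tiny-int8
      - NVIDIA_VISIBLE_DEVICES=all   # or a specific GPU UUID/index
    volumes:
      - /path/to/data:/config
    ports:
      - 10300:10300
    restart: unless-stopped
```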

For more information see the faster-whisper docs.

Usage

To help you get started creating a container from this image you can either use docker-compose or the docker cli.

docker-compose (recommended, click here for more info)

---
services:
  faster-whisper:
    image: lscr.io/linuxserver/faster-whisper:latest
    container_name: faster-whisper
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - WHISPER_MODEL=tiny-int8
      - WHISPER_BEAM=1 #optional
      - WHISPER_LANG=en #optional
    volumes:
      - /path/to/data:/config
    ports:
      - 10300:10300
    restart: unless-stopped
docker cli (click here for more info)

docker run -d \
  --name=faster-whisper \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e WHISPER_MODEL=tiny-int8 \
  -e WHISPER_BEAM=1 `#optional` \
  -e WHISPER_LANG=en `#optional` \
  -p 10300:10300 \
  -v /path/to/data:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/faster-whisper:latest

Parameters

Containers are configured using parameters passed at runtime (such as those above). These parameters are separated by a colon and indicate <external>:<internal> respectively. For example, -p 8080:80 would expose port 80 from inside the container to be accessible from the host's IP on port 8080 outside the container.
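The `<external>:<internal>` convention can be illustrated with a tiny Python helper (`split_mapping` is purely illustrative, not part of the image):

```python
def split_mapping(mapping: str) -> tuple[str, str]:
    """Split a docker <external>:<internal> mapping into its two halves."""
    external, internal = mapping.split(":")
    return external, internal

# -p 8080:80 -> host port 8080 forwards to container port 80
print(split_mapping("8080:80"))  # -> ('8080', '80')
```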

Parameter Function
-p 10300 Wyoming connection port.
-e PUID=1000 for UserID - see below for explanation
-e PGID=1000 for GroupID - see below for explanation
-e TZ=Etc/UTC specify a timezone to use, see this list.
-e WHISPER_MODEL=tiny-int8 Whisper model that will be used for transcription. Choose from tiny, base, small and medium, each with an -int8 compressed variant
-e WHISPER_BEAM=1 Number of candidates to consider simultaneously during transcription.
-e WHISPER_LANG=en Language that you will speak to the add-on.
-v /config Local path for Whisper config files.

Environment variables from files (Docker secrets)

You can set any environment variable from a file by using the special prefix FILE__.

As an example:

-e FILE__MYVAR=/run/secrets/mysecretvariable

Will set the environment variable MYVAR based on the contents of the /run/secrets/mysecretvariable file.
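The behaviour can be sketched in a few lines of Python (`resolve_file_env` is an illustrative helper, not part of the image's init logic):

```python
def resolve_file_env(environ: dict) -> dict:
    """Emulate the FILE__ convention: for every FILE__NAME variable,
    set NAME to the contents of the file FILE__NAME points to."""
    resolved = dict(environ)
    for key, path in environ.items():
        if key.startswith("FILE__"):
            with open(path) as f:
                resolved[key[len("FILE__"):]] = f.read().strip()
    return resolved
```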

Umask for running applications

For all of our images we provide the ability to override the default umask settings for services started within the containers using the optional -e UMASK=022 setting. Keep in mind umask is not chmod: it subtracts from permissions based on its value, it does not add. Please read up here before asking for support.
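How a umask combines with a requested file mode can be shown in a few lines of Python (`apply_umask` is an illustrative helper):

```python
def apply_umask(mode: int, umask: int) -> int:
    """umask removes (masks out) permission bits; it never adds them."""
    return mode & ~umask

# A file created with mode 666 under umask 022 ends up 644:
assert oct(apply_umask(0o666, 0o022)) == "0o644"
# A directory created with mode 777 under umask 022 ends up 755:
assert oct(apply_umask(0o777, 0o022)) == "0o755"
```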

User / Group Identifiers

When using volumes (-v flags), permissions issues can arise between the host OS and the container. We avoid this issue by allowing you to specify the user PUID and group PGID.

Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic.

In this instance PUID=1000 and PGID=1000, to find yours use id your_user as below:

id your_user

Example output:

uid=1000(your_user) gid=1000(your_user) groups=1000(your_user)

Docker Mods

Docker Mods Docker Universal Mods

We publish various Docker Mods to enable additional functionality within the containers. The list of Mods available for this image (if any) as well as universal mods that can be applied to any one of our images can be accessed via the dynamic badges above.

Support Info

  • Shell access whilst the container is running:

    docker exec -it faster-whisper /bin/bash
  • To monitor the logs of the container in realtime:

    docker logs -f faster-whisper
  • Container version number:

    docker inspect -f '{{ index .Config.Labels "build_version" }}' faster-whisper
  • Image version number:

    docker inspect -f '{{ index .Config.Labels "build_version" }}' lscr.io/linuxserver/faster-whisper:latest

Updating Info

Most of our images are static, versioned, and require an image update and container recreation to update the app inside. With some exceptions (noted in the relevant readme.md), we do not recommend or support updating apps inside the container. Please consult the Application Setup section above to see if it is recommended for the image.

Below are the instructions for updating containers:

Via Docker Compose

  • Update images:

    • All images:

      docker-compose pull
    • Single image:

      docker-compose pull faster-whisper
  • Update containers:

    • All containers:

      docker-compose up -d
    • Single container:

      docker-compose up -d faster-whisper
  • You can also remove the old dangling images:

    docker image prune

Via Docker Run

  • Update the image:

    docker pull lscr.io/linuxserver/faster-whisper:latest
  • Stop the running container:

    docker stop faster-whisper
  • Delete the container:

    docker rm faster-whisper
  • Recreate a new container with the same docker run parameters as instructed above (if mapped correctly to a host folder, your /config folder and settings will be preserved)

  • You can also remove the old dangling images:

    docker image prune

Image Update Notifications - Diun (Docker Image Update Notifier)

tip: We recommend Diun for update notifications. Other tools that automatically update containers unattended are not recommended or supported.

Building locally

If you want to make local modifications to these images for development purposes or just to customize the logic:

git clone https://github.com/linuxserver/docker-faster-whisper.git
cd docker-faster-whisper
docker build \
  --no-cache \
  --pull \
  -t lscr.io/linuxserver/faster-whisper:latest .

The ARM variants can be built on x86_64 hardware using multiarch/qemu-user-static.

docker run --rm --privileged multiarch/qemu-user-static:register --reset

Once registered you can define the dockerfile to use with -f Dockerfile.aarch64.

Versions

  • 25.11.23: - Initial Release.

docker-faster-whisper's People

Contributors

linuxserver-ci, thespad


docker-faster-whisper's Issues

faster-whisper Unraid docker not working with HA container

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I do run Home Assistant as a docker container under Unraid.
I installed the 3 Wyoming dockers under Unraid also
(Faster whisper, piper and openwakeword).
I'm able to integrate the 3 dockers in HA.
I'm also able to install the dockers in the assist setup.
But the dockers are not able to listen (Faster-whisper and Openwakeword) and piper is speaking strange words.
The entity in HA is showing unknown

Expected Behavior

I expect to be able to listen to speech

Steps To Reproduce

Just install all Wyoming Dockers.
Integrate them in Home Assistant.
setup assist
try to communicate with an M5Stack Atom Echo to HA

Environment

- OS: Unraid
- How docker service was installed: as dockers under Unraid

CPU architecture

x86-64

Docker creation

docker run
  -d
  --name='faster-whisper'
  --net='ewhnet'
  -e TZ="Europe/Berlin"
  -e HOST_OS="Unraid"
  -e HOST_HOSTNAME="Tower2"
  -e HOST_CONTAINERNAME="faster-whisper"
  -e 'WHISPER_MODEL'='base-int8'
  -e 'WHISPER_BEAM'='1'
  -e 'WHISPER_LANG'='en'
  -e 'PUID'='99'
  -e 'PGID'='100'
  -e 'UMASK'='022'
  -l net.unraid.docker.managed=dockerman
  -l net.unraid.docker.icon='https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/linuxserver-ls-logo.png'
  -p '10300:10300/tcp'
  -v '/mnt/user/appdata/faster-whisper':'/config':'rw' 'lscr.io/linuxserver/faster-whisper'
be01025fb600d7b7a5ef470a4fbea453027126e46b1244fa033899feb46ef89e

Container logs

NFO:__main__:Ready
[migrations] started
[migrations] no migrations found
───────────────────────────────────────

      ██╗     ███████╗██╗ ██████╗
      ██║     ██╔════╝██║██╔═══██╗
      ██║     ███████╗██║██║   ██║
      ██║     ╚════██║██║██║   ██║
      ███████╗███████║██║╚██████╔╝
      ╚══════╝╚══════╝╚═╝ ╚═════╝

   Brought to you by linuxserver.io
───────────────────────────────────────

To support LSIO projects visit:
https://www.linuxserver.io/donate/

───────────────────────────────────────
GID/UID
───────────────────────────────────────

User UID:    99
User GID:    100
───────────────────────────────────────

[custom-init] No custom files found, skipping...
[ls.io-init] done.

[BUG] faster-whisper:gpu-version-1.0.1 runs out of memory after ~ 1h

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Hello all,

I'm running faster-whisper this way:
docker run -d --gpus all --runtime=nvidia --name=faster-whisper --privileged=true -e WHISPER_BEAM=10 -e WHISPER_LANG=fr -e WHISPER_MODEL=medium-int8 -e NVIDIA_DRIVER_CAPABILITIES=all -e NVIDIA_VISIBLE_DEVICES=all -p 10300:10300/tcp -v /mnt/docker/data/faster-whisper/:/config:rw ghcr.io/linuxserver/lspipepr-faster-whisper:gpu-version-1.0.1

Hardware:

OS: GNU Linux Debian 12.5 (kernel: 6.6.13+bpo-amd64)
GPU: nVidia TU106M [GeForce RTX 2060 Mobile] (6GB memory)
CPU: AMD Ryzen 7 4800H with Radeon Graphics (8 cores / 1,4GHz)
Host memory: 32GB
Storage: SSD NVME 1TB

It works fine and the container PID is assigned to the GPU (nvidia-smi)

But after ~1h of inactivity I get the following Out Of Memory error: https://pastebin.com/raw/c3s4wYAm
If I check nvidia-smi, I still see the container PID using ~1.3GB of memory (so: quite a lot less than the 6GB available on the GPU)

Is there someone who could be so kind as to point me to a fix?
When searching, I spotted this config: max_split_size_mb but I don't know if it could help and I really don't know how to apply it.
Am I using too big a model for my GPU? Or should I reduce the beam count?

Thank you very much for your help
Best regards

Expected Behavior

faster-whisper stays available long-term, without OOM errors

NB: when the problem occurs, nvidia-smi shows the container PID using 1.3GB of GPU memory (so there is still ~4.7GB of memory available)

Steps To Reproduce

  1. In this environment & with this config:
    OS: GNU Linux Debian 12.5 (kernel: 6.6.13+bpo-amd64)
    GPU: nVidia TU106M [GeForce RTX 2060 Mobile] (6GB memory)
    CPU: AMD Ryzen 7 4800H with Radeon Graphics (8 cores / 1,4GHz)
    Host memory: 32GB
    Storage: SSD NVME 1TB
    Docker version: 20.10.24+dfsg1
  2. RUN docker run -d --gpus all --runtime=nvidia --name=faster-whisper --privileged=true -e WHISPER_BEAM=10 -e WHISPER_LANG=fr -e WHISPER_MODEL=medium-int8 -e NVIDIA_DRIVER_CAPABILITIES=all -e NVIDIA_VISIBLE_DEVICES=all -p 10300:10300/tcp -v /mnt/docker/data/faster-whisper/:/config:rw ghcr.io/linuxserver/lspipepr-faster-whisper:gpu-version-1.0.1
  3. Wait ~1h and notice "exception=RuntimeError('CUDA failed with error out of memory')>" in container's logs

Environment

- OS:GNU Linux Debian 12.5 (kernel: 6.6.13+bpo-amd64)
- How docker service was installed:apt update + apt install docker.io + nvidia-ctk runtime configure --runtime=docker

CPU architecture

x86-64

Docker creation

docker run -d --gpus all --runtime=nvidia --name=faster-whisper --privileged=true -e WHISPER_BEAM=10 -e WHISPER_LANG=fr -e WHISPER_MODEL=medium-int8 -e NVIDIA_DRIVER_CAPABILITIES=all -e NVIDIA_VISIBLE_DEVICES=all -p 10300:10300/tcp -v /mnt/docker/data/faster-whisper/:/config:rw ghcr.io/linuxserver/lspipepr-faster-whisper:gpu-version-1.0.1

Container logs

[custom-init] No custom files found, skipping...
INFO:__main__:Ready
[ls.io-init] done.
INFO:wyoming_faster_whisper.handler: Allume la cuisine.
INFO:wyoming_faster_whisper.handler: Éteins la cuisine !
ERROR:asyncio:Task exception was never retrieved
future: <Task finished name='Task-14' coro=<AsyncEventHandler.run() done, defined at /lsiopy/lib/python3.10/site-packages/wyoming/server.py:28> exception=RuntimeError('CUDA failed with error out of memory')>
Traceback (most recent call last):
  File "/lsiopy/lib/python3.10/site-packages/wyoming/server.py", line 35, in run
    if not (await self.handle_event(event)):
  File "/lsiopy/lib/python3.10/site-packages/wyoming_faster_whisper/handler.py", line 75, in handle_event
    text = " ".join(segment.text for segment in segments)
  File "/lsiopy/lib/python3.10/site-packages/wyoming_faster_whisper/handler.py", line 75, in <genexpr>
    text = " ".join(segment.text for segment in segments)
  File "/lsiopy/lib/python3.10/site-packages/wyoming_faster_whisper/faster_whisper/transcribe.py", line 162, in generate_segments
    for start, end, tokens in tokenized_segments:
  File "/lsiopy/lib/python3.10/site-packages/wyoming_faster_whisper/faster_whisper/transcribe.py", line 186, in generate_tokenized_segments
    result, temperature = self.generate_with_fallback(segment, prompt, options)
  File "/lsiopy/lib/python3.10/site-packages/wyoming_faster_whisper/faster_whisper/transcribe.py", line 279, in generate_with_fallback
    result = self.model.generate(
RuntimeError: CUDA failed with error out of memory

[BUG] [GPU] Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I'm using the lscr.io/linuxserver/faster-whisper:gpu and I'm encountering issues where any Wyoming prompt results in the following error:

Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory

It appears to be related to this behavior in faster-whisper
SYSTRAN/faster-whisper#516

Expected Behavior

faster-whisper is able to use the GPU to parse speech to text

Steps To Reproduce

Setup the faster-whisper docker container per below
Added faster-whisper to Home Assistant using the Wyoming protocol
Setup a Raspberry PI 3+ with wyoming-satellite per https://github.com/rhasspy/wyoming-satellite/blob/master/docs/tutorial_installer.md
Prompts are responded to (local wyoming-wakeword.service) but the logs on the docker container indicate an error

Logs for docker container lscr.io/linuxserver/faster-whisper:gpu

INFO:faster_whisper:Processing audio with duration 00:15.000
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory

Logs for wyoming-satellite.service

run[807]: WARNING:root:Event(type='error', data={'text': 'speech-to-text failed', 'code': 'stt-stream-failed'}, payload=None)

Environment

- OS: Centos Stream 8 using the kernel-ml module
Linux 6.5.6-1.el8.elrepo.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Oct  6 17:10:59 EDT 2023 x86_64 x86_64 x86_64 GNU/Linux
- How docker service was installed:
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum erase podman buildah
yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-compose
wget 'http://international.download.nvidia.com/XFree86/Linux-x86_64/550.67/NVIDIA-Linux-x86_64-550.67.run'
chmod +x NVIDIA-Linux-x86_64-550.67.run
./NVIDIA-Linux-x86_64-550.67.run
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo |   sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
yum install -y nvidia-container-toolkit
nvidia-ctk runtime configure --runtime=docker


nvidia-smi
Sat Apr 13 21:29:22 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.67                 Driver Version: 550.67         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1080        Off |   00000000:05:00.0 Off |                  N/A |
|  0%   31C    P8              8W /  180W |    6487MiB /   8192MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A   1731864      C   /app/.venv/bin/python                        3244MiB |
|    0   N/A  N/A   1737066      C   python3                                      3240MiB |
+-----------------------------------------------------------------------------------------+


CPU architecture

x86-64

Docker creation

version: '3.8'
services:
  faster-whisper:
    image: lscr.io/linuxserver/faster-whisper:gpu
    container_name: faster-whisper
    restart: always
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    environment:
      - PUID=<REDACTED>
      - PGID=<REDACTED>
      - TZ=<REDACTED>
      - WHISPER_MODEL=medium
      - WHISPER_BEAM=1 #optional
      - WHISPER_LANG=en #optional
    volumes:
      - /path/to/docker/whisper/config/:/config
    ports:
      - 10300:10300
    runtime: nvidia
    networks:
      swag_default:


Container logs

[custom-init] No custom files found, skipping...
[2024-04-13 21:18:16.336] [ctranslate2] [thread 153] [warning] The compute type inferred from the saved model is float16, but the target device or backend do not support efficient float16 computation. The model weights have been automatically converted to use the float32 compute type instead.
INFO:__main__:Ready
[ls.io-init] done.
INFO:faster_whisper:Processing audio with duration 00:15.000
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
[2024-04-13 21:21:04.052] [ctranslate2] [thread 223] [warning] The compute type inferred from the saved model is float16, but the target device or backend do not support efficient float16 computation. The model weights have been automatically converted to use the float32 compute type instead.
INFO:__main__:Ready

[BUG] cannot use alternative faster-whisper models

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

When I add my own faster-whisper ct2 model and corresponding ENV name, I get an error message that only the tiny, base, small and medium int8 models can be used.

After renaming my model and ENV to medium-int8 and restarting container I get this error:
WARNING:wyoming_faster_whisper.download:Model hashes do not match

Moreover, the container starts a download and erases my model without permission:
INFO:main:Downloading FasterWhisperModel.MEDIUM_INT8 to /config

Expected Behavior

  1. I expect to be able to use any compatible CT2 model of my choice.
  2. Erasing a user's data without notice is bad practice in my opinion.

Steps To Reproduce

Copy your custom model to config dir.
Start a container.

Environment

- OS: Docker on Debian11 VM on Proxmox7.4
- How docker service was installed:

CPU architecture

x86-64

Docker creation

docker run -d \
 --name=whisper \
 -p 10300:10300 \
 -v /data/docker/whisper:/config \
 -e PUID=1000 \
 -e PGID=1000 \
 -e TZ=Europe/Moscow \
 -e WHISPER_MODEL=medium-int8 \
 -e WHISPER_BEAM=1 \
 -e WHISPER_LANG=ru \
 -e NVIDIA_DRIVER_CAPABILITIES=all \
 --gpus all \
 --runtime nvidia \
 --restart unless-stopped \
 lscr.io/linuxserver/faster-whisper:gpu

Container logs

[migrations] started
[migrations] no migrations found
───────────────────────────────────────
      ██╗     ███████╗██╗ ██████╗
      ██║     ██╔════╝██║██╔═══██╗
      ██║     ███████╗██║██║   ██║
      ██║     ╚════██║██║██║   ██║
      ███████╗███████║██║╚██████╔╝
      ╚══════╝╚══════╝╚═╝ ╚═════╝
   Brought to you by linuxserver.io
───────────────────────────────────────
To support LSIO projects visit:
https://www.linuxserver.io/donate/
───────────────────────────────────────
GID/UID
───────────────────────────────────────
User UID:    1000
User GID:    1000
───────────────────────────────────────
[custom-init] No custom files found, skipping...
WARNING:wyoming_faster_whisper.download:Model hashes do not match
WARNING:wyoming_faster_whisper.download:Expected: {'config.json': 'e5a2f85afc17f73960204cad2b002633', 'model.bin': '99b6aca05c475cbdcc182db2b2aed363', 'vocabulary.txt': 'c1120a13c94a8cbb132489655cdd1854'}
WARNING:wyoming_faster_whisper.download:Got: {'model.bin': '1068d7a50dd6889462a99f17c10a409a', 'config.json': '344ea4eb5681c254bce0fbd6f7afe352', 'vocabulary.txt': ''}
INFO:__main__:Downloading FasterWhisperModel.MEDIUM_INT8 to /config

[BUG] medium.en-int8 invalid choice for model

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

seems to be an additional variable

Expected Behavior

No response

Steps To Reproduce

select medium.en.int8

Environment

- OS: Unraid

CPU architecture

x86-64

Docker creation

via Unraid

Container logs

__main__.py: error: argument --model: invalid choice: 'medium.en-int8' (choose from 'tiny', 'tiny-int8', 'base', 'base-int8', 'small', 'small-int8', 'medium', 'medium-int8')
usage: __main__.py [-h] --model
                   {tiny,tiny-int8,base,base-int8,small,small-int8,medium,medium-int8}
                   --uri URI --data-dir DATA_DIR [--download-dir DOWNLOAD_DIR]
                   [--device DEVICE] [--language LANGUAGE]
                   [--compute-type COMPUTE_TYPE] [--beam-size BEAM_SIZE]
                   [--debug]
__main__.py: error: argument --model: invalid choice: 'medium.en-int8' (choose from 'tiny', 'tiny-int8', 'base', 'base-int8', 'small', 'small-int8', 'medium', 'medium-int8')

how to invoke faster-whisper in this docker image

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I am just trying to figure out how to invoke whisper after running this Docker image.

"whisper-ctranslate2" doesn't seem to exist in the docker.

So do I run the Docker image and then, from the command line ("docker exec -it faster-whisper /bin/bash"), pass some Python programs to it (there seem to be mostly Python packages in the Docker image)?

Sorry, this is a pretty dumb question.

Expected Behavior

No response

Steps To Reproduce

It's an Unraid server; installed the docker, specified the small model, gave it time to download the model (I checked - it's there).

Environment

- OS:
- How docker service was installed:
unraid docker image

CPU architecture

x86-64

Docker creation

docker run
  -d
  --name='faster-whisper'
  --net='bridge'
  -e TZ="Australia/Sydney"
  -e HOST_OS="Unraid"
  -e HOST_HOSTNAME="towerold"
  -e HOST_CONTAINERNAME="faster-whisper"
  -e 'WHISPER_MODEL'='small'
  -e 'WHISPER_BEAM'='1'
  -e 'WHISPER_LANG'='en'
  -e 'PUID'='99'
  -e 'PGID'='100'
  -e 'UMASK'='022'
  -l net.unraid.docker.managed=dockerman
  -l net.unraid.docker.icon='https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/linuxserver-ls-logo.png'
  -p '10300:10300/tcp'
  -v '/mnt/user/appdata/faster-whisper':'/config':'rw' 'lscr.io/linuxserver/faster-whisper'
13c397248869fe86f84a65fb80cb262a1d0ad0805d1b09e300375de5c97caa1b

The command finished successfully!

Container logs


WARNING:wyoming_faster_whisper.download:Model hashes do not match
WARNING:wyoming_faster_whisper.download:Expected: {'config.json': 'e5a2f85afc17f73960204cad2b002633', 'model.bin': '8b2c0a5013899c255e1f16edc237123b', 'vocabulary.txt': 'c1120a13c94a8cbb132489655cdd1854'}
WARNING:wyoming_faster_whisper.download:Got: {'model.bin': '', 'config.json': '', 'vocabulary.txt': ''}
INFO:__main__:Downloading FasterWhisperModel.SMALL to /config
INFO:__main__:Ready
[migrations] started
[migrations] no migrations found
───────────────────────────────────────

      ██╗     ███████╗██╗ ██████╗ 
      ██║     ██╔════╝██║██╔═══██╗
      ██║     ███████╗██║██║   ██║
      ██║     ╚════██║██║██║   ██║
      ███████╗███████║██║╚██████╔╝
      ╚══════╝╚══════╝╚═╝ ╚═════╝ 

   Brought to you by linuxserver.io
───────────────────────────────────────

To support LSIO projects visit:
https://www.linuxserver.io/donate/

───────────────────────────────────────
GID/UID
───────────────────────────────────────

User UID:    99
User GID:    100
───────────────────────────────────────

[custom-init] No custom files found, skipping...
[ls.io-init] done.
