idiap / coqui-ai-tts

This project forked from coqui-ai/tts


๐Ÿธ๐Ÿ’ฌ - a deep learning toolkit for Text-to-Speech, battle-tested in research and production

Home Page: https://coqui-tts.readthedocs.io

License: Mozilla Public License 2.0

Shell 0.13% Python 91.94% Makefile 0.07% HTML 0.26% Jupyter Notebook 7.53% Cython 0.04% Dockerfile 0.03%

coqui-ai-tts's Introduction

๐ŸธCoqui TTS News

  • 📣 Fork of the original, unmaintained repository. New PyPI package: coqui-tts
  • 📣 ⓍTTSv2 is here with 16 languages and better performance across the board.
  • 📣 ⓍTTS fine-tuning code is out. Check the example recipes.
  • 📣 ⓍTTS can now stream with <200ms latency.
  • 📣 ⓍTTS, our production TTS model that can speak 13 languages, is released: Blog Post, Demo, Docs
  • 📣 🐶Bark is now available for inference with unconstrained voice cloning. Docs
  • 📣 You can use ~1100 Fairseq models with 🐸TTS.
  • 📣 🐸TTS now supports 🐢Tortoise with faster inference. Docs

๐ŸธTTS is a library for advanced Text-to-Speech generation.

๐Ÿš€ Pretrained models in +1100 languages.

๐Ÿ› ๏ธ Tools for training new models and fine-tuning existing models in any language.

๐Ÿ“š Utilities for dataset analysis and curation.




💬 Where to ask questions

Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.

Type Platforms
🚨 Bug Reports GitHub Issue Tracker
🎁 Feature Requests & Ideas GitHub Issue Tracker
👩‍💻 Usage Questions GitHub Discussions
🗯 General Discussion GitHub Discussions or Discord

The issues and discussions in the original repository are also still a useful source of information.

🔗 Links and Resources

Type Links
💼 Documentation ReadTheDocs
💾 Installation TTS/README.md
👩‍💻 Contributing CONTRIBUTING.md
📌 Road Map Main Development Plans
🚀 Released Models Standard models and Fairseq models in ~1100 languages
📰 Papers TTS Papers

Features

  • High-performance Deep Learning models for Text2Speech tasks.
    • Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
    • Speaker Encoder to compute speaker embeddings efficiently.
    • Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN)
  • Fast and efficient model training.
  • Detailed training logs on the terminal and Tensorboard.
  • Support for Multi-speaker TTS.
  • Efficient, flexible, lightweight but feature-complete Trainer API.
  • Released and ready-to-use models.
  • Tools to curate Text2Speech datasets under dataset_analysis.
  • Utilities to use and test your models.
  • Modular (but not too much) code base enabling easy implementation of new ideas.

Model Implementations

Spectrogram models

End-to-End Models

Attention Methods

  • Guided Attention: paper
  • Forward Backward Decoding: paper
  • Graves Attention: paper
  • Double Decoder Consistency: blog
  • Dynamic Convolutional Attention: paper
  • Alignment Network: paper

Speaker Encoder

Vocoders

Voice Conversion

You can also help us implement more models.

Installation

๐ŸธTTS is tested on Ubuntu 22.04 with python >= 3.9, < 3.13..

If you are only interested in synthesizing speech with the released 🐸TTS models, installing from PyPI is the easiest option.

pip install coqui-tts

If you plan to code or train models, clone 🐸TTS and install it locally.

git clone https://github.com/idiap/coqui-ai-TTS
cd coqui-ai-TTS
pip install -e .

Optional dependencies

The following extras allow the installation of optional dependencies:

Name Description
all All optional dependencies, except dev and docs
dev Development dependencies
docs Dependencies for building the documentation
notebooks Dependencies only used in notebooks
server Dependencies to run the TTS server
bn Bangla G2P
ja Japanese G2P
ko Korean G2P
zh Chinese G2P
languages All language-specific dependencies

You can install extras with one of the following commands:

pip install coqui-tts[server,ja]
pip install -e .[server,ja]

Platforms

If you are on Ubuntu (Debian), you can also run the following commands for installation.

$ make system-deps  # intended to be used on Ubuntu (Debian). Let us know if you have a different OS.
$ make install

If you are on Windows, 👑@GuyPaddock wrote installation instructions here (note that these are out of date, e.g. you need to have at least Python 3.9).

Docker Image

You can also try out TTS without installing it by using the Docker image. Simply run the following commands:

docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu
python3 TTS/server/server.py --list_models #To get the list of available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits # To start a server

You can then enjoy the TTS server here. More details about the Docker images (like GPU support) can be found here.
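
Once the server is running, you can also query it programmatically instead of using the browser page. The following is a minimal sketch (not part of the original README), assuming the demo server listens on localhost:5002 and exposes an /api/tts endpoint that accepts a text query parameter and returns WAV bytes; check TTS/server/server.py for the exact interface.

import urllib.parse
import urllib.request

# Build the request against the locally running demo server (assumed endpoint).
params = urllib.parse.urlencode({"text": "Hello from the TTS server!"})
url = f"http://localhost:5002/api/tts?{params}"

# Fetch the synthesized audio and write it to disk.
with urllib.request.urlopen(url) as response:
    audio = response.read()

with open("server_output.wav", "wb") as f:
    f.write(audio)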

Synthesizing speech by 🐸TTS

๐Ÿ Python API

Running a multi-speaker and multi-lingual model

import torch
from TTS.api import TTS

# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"

# List available 🐸TTS models
print(TTS().list_models())

# Init TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

# Run TTS
# โ— Since this model is multi-lingual voice cloning model, we must set the target speaker_wav and language
# Text to speech list of amplitude values as output
wav = tts.tts(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en")
# Text to speech to a file
tts.tts_to_file(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")

Running a single speaker model

# Init TTS with the target model name
tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False).to(device)

# Run TTS
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH)

# Example voice cloning with YourTTS in English, French and Portuguese
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False).to(device)
tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")
tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr-fr", file_path="output.wav")
tts.tts_to_file("Isso รฉ clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt-br", file_path="output.wav")

Example voice conversion

Converting the voice in source_wav to the voice of target_wav

tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False).to("cuda")
tts.voice_conversion_to_file(source_wav="my/source.wav", target_wav="my/target.wav", file_path="output.wav")

Example voice cloning together with the voice conversion model.

This way, you can clone voices by using any model in 🐸TTS.

tts = TTS("tts_models/de/thorsten/tacotron2-DDC")
tts.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav"
)

Example text to speech using Fairseq models in ~1100 languages 🤯.

For Fairseq models, use the following name format: tts_models/<lang-iso_code>/fairseq/vits. You can find the language ISO codes here and learn about the Fairseq models here.

# TTS with fairseq models
api = TTS("tts_models/deu/fairseq/vits")
api.tts_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    file_path="output.wav"
)

Command-line tts

Synthesize speech on the command line.

You can either use your trained model or choose a model from the provided list.

If you don't specify any model, it uses the LJSpeech-based English model.

Single Speaker Models

  • List provided models:

    $ tts --list_models
    
  • Get model info (for both tts_models and vocoder_models):

    • Query by type/name: The model_info_by_name uses the name as it appears in the output of --list_models.

      $ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
      

      For example:

      $ tts --model_info_by_name tts_models/tr/common-voice/glow-tts
      $ tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2
      
    • Query by type/idx: The model_query_idx uses the corresponding idx from --list_models.

      $ tts --model_info_by_idx "<model_type>/<model_query_idx>"
      

      For example:

      $ tts --model_info_by_idx tts_models/3
      
    • Query model info by full name:

      $ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
      
  • Run TTS with default models:

    $ tts --text "Text for TTS" --out_path output/path/speech.wav
    
  • Run TTS and pipe out the generated TTS wav file data:

    $ tts --text "Text for TTS" --pipe_out --out_path output/path/speech.wav | aplay
    
  • Run a TTS model with its default vocoder model:

    $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
    

    For example:

    $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --out_path output/path/speech.wav
    
  • Run with specific TTS and vocoder models from the list:

    $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
    

    For example:

    $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --vocoder_name "vocoder_models/en/ljspeech/univnet" --out_path output/path/speech.wav
    
  • Run your own TTS model (Using Griffin-Lim Vocoder):

    $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
    
  • Run your own TTS and Vocoder models:

    $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
        --vocoder_path path/to/vocoder.pth --vocoder_config_path path/to/vocoder_config.json
    

Multi-speaker Models

  • List the available speakers and choose a <speaker_id> among them:

    $ tts --model_name "<language>/<dataset>/<model_name>"  --list_speaker_idxs
    
  • Run the multi-speaker TTS model with the target speaker ID:

    $ tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>"  --speaker_idx <speaker_id>
    
  • Run your own multi-speaker TTS model:

    $ tts --text "Text for TTS" --out_path output/path/speech.wav --model_path path/to/model.pth --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
    

Voice Conversion Models

$ tts --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --source_wav <path/to/speaker/wav> --target_wav <path/to/reference/wav>

Directory Structure

|- notebooks/       (Jupyter Notebooks for model evaluation, parameter selection and data analysis.)
|- utils/           (common utilities.)
|- TTS
    |- bin/             (folder for all the executables.)
      |- train*.py                  (train your target model.)
      |- ...
    |- tts/             (text to speech models)
        |- layers/          (model layer definitions)
        |- models/          (model definitions)
        |- utils/           (model specific utilities.)
    |- speaker_encoder/ (Speaker Encoder models.)
        |- (same)
    |- vocoder/         (Vocoder models.)
        |- (same)

coqui-ai-tts's People

Contributors

a-froghyar, adonispujols, agrinh, akx, aya-aljafari, ayushexel, bgerazov, edresson, eginhard, erogol, freds0, gerazov, gorkemgoknar, kaiidams, kirianguiller, lexkoro, manmay-nakhashi, mic92, mittimithai, nmstoker, omahs, p0p4k, reuben, rishikksh20, sanjaesc, synesthesiam, thllwg, thorstenmueller, twerkmeister, weberjulian


coqui-ai-tts's Issues

[Bug] Python 3.12 is not supported

Describe the bug

Hi, thanks a lot for making a fork and maintaining this.
On Fedora 39 (the next release will be Python 3.13, I think), with

python --version
Python 3.12.2

I cannot install because of:

pip install coqui-tts
Defaulting to user installation because normal site-packages is not writeable
ERROR: Ignored the following versions that require a different python version: 0.22.1 Requires-Python <3.12,>=3.9.0
ERROR: Could not find a version that satisfies the requirement coqui-tts (from versions: none)
ERROR: No matching distribution found for coqui-tts

To Reproduce

You can create or run a docker image of fedora 39 with https://github.com/containers/toolbox and try to install coqui-tts

Expected behavior

No response

Logs

No response

Environment

$ python3 collect_env_info.py
Traceback (most recent call last):
  File "/var/home/yodatak/Projets/public/TTS/TTS/collect_env_info.py", line 7, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'

Additional context

No response

[Bug] Server does not support fairseq models

Describe the bug

It is not possible to run server.py with the fairseq models.

To Reproduce

docker run --rm -it -p 5002:5002 --platform linux/amd64 --entrypoint /bin/bash ghcr.io/idiap/coqui-tts
python3 TTS/server/server.py --model_name "tts_models/crh/fairseq/vits"

Expected behavior

No response

Logs

File "/root/TTS/server/server.py", line 111, in <module>
    synthesizer = Synthesizer(
  File "/root/TTS/utils/synthesizer.py", line 96, in __init__
    self._load_tts(tts_checkpoint, tts_config_path, use_cuda)
  File "/root/TTS/utils/synthesizer.py", line 186, in _load_tts
    self.tts_config = load_config(tts_config_path)
  File "/root/TTS/config/__init__.py", line 85, in load_config
    ext = os.path.splitext(config_path)[1]
  File "/usr/lib/python3.10/posixpath.py", line 118, in splitext
    p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType

Environment

Docker container

Additional context

No response

[Bug] Cloning voice add sometimes a noise

Describe the bug

The final result of an Italian (but also English) voice cloning sometimes adds an annoying noise.

To Reproduce

This WeTransfer link:
https://wetransfer.com/downloads/f8c9dd0ee069e3828d60ad68415a601220240707220325/c77f5f0fbf319e01a8e0e1031c55076b20240707220352/964660

contains the reference file "mes1603.wav" and the final result "mes1603 (sample result).wav".

The noise is present especially at the 20-24 second position.

Expected behavior

A cloned voice without any added noise (the reference file doesn't contain noise).

Logs

No response

Environment

TTS Version: xttsv2_2.0.3
PyTorch version: 2.2.1
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 11 Pro
GCC version: Could not collect

Python version: 3.11.0 | packaged by conda-forge | (main, Oct 25 2022, 06:12:32) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1060 6GB
Nvidia driver version: 537.13
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture=9
CurrentClockSpeed=1600
DeviceID=CPU0
Family=205
L2CacheSize=11776
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=1600
Name=13th Gen Intel(R) Core(TM) i5-13500T
ProcessorType=3
Revision=

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnxruntime==1.18.1
[pip3] onnxruntime-gpu==1.18.1
[pip3] rotary-embedding-torch==0.6.4
[pip3] torch==2.2.1
[pip3] torch-stoi==0.2.1
[pip3] torchaudio==2.2.1
[pip3] torchcrepe==0.0.23
[pip3] torchvision==0.17.1
[conda] Could not collect
python.exe -m torch.utils.collect_env

Additional context

The TTS version is xttsv2_2.0.3

[Bug] Training XTTSv2 leads to weird training lags

Describe the bug

Hello, training XTTSv2 leads to weird training lags - training gets stuck with no errors

Using DDP with 6x RTX A6000 and 512 GB RAM.
Here is the GPU load monitoring graph: purple = gpu0, green = gpu1 (all the other GPUs behave like gpu1).
[image]

Without DDP:
[image]

Tried different dataset sizes (2500 hrs, 250 hrs); the result remains the same.

I think there's some kind of error in the Trainer or maybe in the XTTS scripts, but I don't know where to dig. Thank you.
There is no swap memory usage, no CPU overload, and no RAM overload (according to ClearML, htop and top at least).
The disk is a fast NVMe.

To Reproduce

python -m trainer.distribute --script recipes/ljspeech/xtts_v2/train_gpt_xtts.py --gpus 0,1,2,3,4,5
python -m trainer.distribute --script recipes/ljspeech/xtts_v2/train_gpt_xtts.py --gpus 0,1
python3 recipes/ljspeech/xtts_v2/train_gpt_xtts.py

Expected behavior

No response

Logs

No response

Environment

TTS 0.24.1
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.54.03              Driver Version: 535.54.03    CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX A6000               On  | 00000000:01:00.0 Off |                  Off |
| 46%   70C    P2             229W / 300W |  32382MiB / 49140MiB |     91%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA RTX A6000               On  | 00000000:25:00.0 Off |                  Off |
| 42%   68C    P2             246W / 300W |  27696MiB / 49140MiB |     77%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA RTX A6000               On  | 00000000:41:00.0 Off |                  Off |
| 38%   67C    P2             256W / 300W |  27640MiB / 49140MiB |     63%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   3  NVIDIA RTX A6000               On  | 00000000:81:00.0 Off |                  Off |
| 39%   67C    P2             245W / 300W |  27640MiB / 49140MiB |     67%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   4  NVIDIA RTX A6000               On  | 00000000:A1:00.0 Off |                  Off |
| 46%   70C    P2             239W / 300W |  27620MiB / 49140MiB |     66%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   5  NVIDIA RTX A6000               On  | 00000000:C2:00.0 Off |                  Off |
| 30%   31C    P8              17W / 300W |      3MiB / 49140MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

Additional context

No response

[Feature request] Support latest version of networkx

๐Ÿš€ Feature Description

I am working on a project which uses llama-index-core, but it depends on a version of networkx which is incompatible with coqui-tts. From the resolver output:

and llama-index-core (0.10.51) depends on networkx (>=3.0), llama-index-core (>=0.10.37.post1,<0.11.0) requires networkx (>=3.0). And because llama-index-llms-azure-openai (>=0.1.5,<0.2.0) requires networkx (>=3.0) or llama-index-core (>=0.10.37.post1,<0.11.0) (5), llama-index-llms-azure-openai (>=0.1.5,<0.2.0) requires networkx (>=3.0) And because coqui-tts (>=0.24.1,<0.25.0) requires networkx (>=2.5.0,<3.0.0) (1),

Solution

Could you please consider supporting the current version of networkx?

[Feature request] Text for synthesis needs to be normalized for languages with diacritics

Text for synthesis needs to be normalized for languages with diacritics or synthesis will be incorrect under certain circumstances.

For diacritics, like German with its umlauts (äöü), there are often at least two ways to represent them in Unicode text: precomposed (a single code point: ä) and decomposed (a base code point modified by another: a + ¨). Some text sources, like piping a string into the tts command via xargs sourced from a text file, may not convert from decomposed to precomposed. This is a problem, because the models I tested (i.e. "thorsten/tacotron2-DDC") only synthesize an umlaut in the precomposed form. They will just ignore the diacritics characters otherwise, synthesizing the base letter.

I'm not a Python dev. A hacky way of fixing this would be to modify "synthesize.py":

import unicodedata
…
args = parser.parse_args()
args.text = unicodedata.normalize('NFC', args.text)

Alternatively we could find some other way to make sure that the models are always supplied tokens that they can synthesize.

The text conversion could be optional via a command line argument.
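
As a standalone illustration of the normalization above (plain Python, not code from this repository), NFC collapses a decomposed umlaut into its precomposed form:

import unicodedata

decomposed = "a\u0308"                    # 'a' followed by a combining diaeresis (two code points)
precomposed = unicodedata.normalize("NFC", decomposed)

print(len(decomposed), len(precomposed))  # 2 1
print(precomposed == "\u00e4")            # True: the single precomposed code point 'ä'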

[Bug] logging error

Describe the bug

This appears on the log while doing synth

https://github.com/idiap/coqui-ai-TTS/blob/dev/TTS/utils/synthesizer.py#L506-L507

To Reproduce

I'm doing

wavs = synthesizer.tts(text, speaker_name=speaker_idx)

with MLS_20 as speaker_name and text of "<Imil sseeiss millones ciento sseesseentaiocho mil sseeteecientos ochentaitrés>"

Expected behavior

Synth without errors

Logs

Traceback (most recent call last):
  File "/root/.local/lib/python3.10/site-packages/flask/app.py", line 1473, in wsgi_app
    response = self.full_dispatch_request()
  File "/root/.local/lib/python3.10/site-packages/flask/app.py", line 882, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/root/.local/lib/python3.10/site-packages/flask/app.py", line 880, in full_dispatch_request
    rv = self.dispatch_request()
  File "/root/.local/lib/python3.10/site-packages/flask/app.py", line 865, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "/app/main.py", line 149, in tts
    wavs = synthesizer.tts(text, speaker_name=speaker_idx)
  File "/root/.local/lib/python3.10/site-packages/TTS/utils/synthesizer.py", line 507, in tts
    logger.info("Real-time factor: %.3f", process_time / audio_time)
ZeroDivisionError: float division by zero
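
For context, the failing statement divides the processing time by the length of the generated audio, which is zero here. A hypothetical guard (illustrative names, not the library's actual code) would avoid the crash:

import logging

logger = logging.getLogger(__name__)

def log_real_time_factor(process_time: float, audio_time: float) -> None:
    # Hypothetical helper: only report the real-time factor when audio was produced,
    # avoiding the ZeroDivisionError shown above when audio_time is 0.
    if audio_time > 0:
        logger.info("Real-time factor: %.3f", process_time / audio_time)
    else:
        logger.warning("No audio generated (audio_time is 0); skipping real-time factor.")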

Environment

{
    "CUDA": {
        "GPU": [],
        "available": false,
        "version": "12.1"
    },
    "Packages": {
        "PyTorch_debug": false,
        "PyTorch_version": "2.3.1+cu121",
        "TTS": "0.24.1",
        "numpy": "1.25.2"
    },
    "System": {
        "OS": "Linux",
        "architecture": [
            "64bit",
            ""
        ],
        "processor": "x86_64",
        "python": "3.10.11",
        "version": "#1 SMP Tue Jun 18 14:00:06 UTC 2024"
    }
}

Additional context

No response

Add Gradio 4.28.x support

๐Ÿš€ Gradio 4.28.x support

The latest Gradio versions do not work: https://pypi.org/project/gradio/#history
Everything is okay till 4.26.0, then it breaks with 4.27.0 and 4.28.x.
I certainly don't consider this a bug; you are supporting the still recent gradio 4.26.0. Thanks for the updated Python 3.12 fork!

Solution

Update dependencies to work with the latest Gradio versions.

Alternative Solutions

Use a slightly older version of Gradio; 4.26.0 or below.

Additional context

Log:

Resolving dependencies...
    Because spacy[ja] (3.7.2) depends on typer (>=0.3.0,<0.10.0)
 and spacy[ja] (3.7.1) depends on typer (>=0.3.0,<0.10.0), spacy[ja] (3.7.1 || 3.7.2) requires typer (>=0.3.0,<0.10.0).
    And because spacy[ja] (3.7.0) depends on typer (>=0.3.0,<0.10.0), spacy[ja] (3.7.0 || 3.7.1 || 3.7.2) requires typer (>=0.3.0,<0.10.0).
    And because spacy[ja] (3.6.1) depends on typer (>=0.3.0,<0.10.0)
 and spacy[ja] (3.6.0) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0), spacy[ja] (3.6.0 || 3.6.1 || 3.7.0 || 3.7.1 || 3.7.2) requires typer (>=0.3.0,<0.10.0) or pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0).
    And because spacy[ja] (3.5.4) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0)
 and spacy[ja] (3.5.3) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0), spacy[ja] (3.5.3 || 3.5.4 || 3.6.0 || 3.6.1 || 3.7.0 || 3.7.1 || 3.7.2) requires typer (>=0.3.0,<0.10.0) or pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0).
    And because spacy[ja] (3.5.2) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0)
 and spacy[ja] (3.5.1) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0), spacy[ja] (3.5.1 || 3.5.2 || 3.5.3 || 3.5.4 || 3.6.0 || 3.6.1 || 3.7.0 || 3.7.1 || 3.7.2) requires typer (>=0.3.0,<0.10.0) or pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0).
    And because spacy[ja] (3.5.0) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0)
 and spacy[ja] (3.4.4) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0), spacy[ja] (3.4.4 || 3.5.0 || 3.5.1 || 3.5.2 || 3.5.3 || 3.5.4 || 3.6.0 || 3.6.1 || 3.7.0 || 3.7.1 || 3.7.2) requires typer (>=0.3.0,<0.10.0) or pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0).
    And because spacy[ja] (3.4.3) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0)
 and spacy[ja] (3.4.2) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0), spacy[ja] (3.4.2 || 3.4.3 || 3.4.4 || 3.5.0 || 3.5.1 || 3.5.2 || 3.5.3 || 3.5.4 || 3.6.0 || 3.6.1 || 3.7.0 || 3.7.1 || 3.7.2) requires typer (>=0.3.0,<0.10.0) or pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0).
    And because spacy[ja] (3.4.1) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.10.0)
 and spacy[ja] (3.4.0) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.10.0), spacy[ja] (3.4.0 || 3.4.1 || 3.4.2 || 3.4.3 || 3.4.4 || 3.5.0 || 3.5.1 || 3.5.2 || 3.5.3 || 3.5.4 || 3.6.0 || 3.6.1 || 3.7.0 || 3.7.1 || 3.7.2) requires typer (>=0.3.0,<0.10.0) or pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0).
    And because spacy[ja] (3.3.3) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0)
 and spacy[ja] (3.3.2) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0), spacy[ja] (3.3.2 || 3.3.3 || 3.4.0 || 3.4.1 || 3.4.2 || 3.4.3 || 3.4.4 || 3.5.0 || 3.5.1 || 3.5.2 || 3.5.3 || 3.5.4 || 3.6.0 || 3.6.1 || 3.7.0 || 3.7.1 || 3.7.2) requires typer (>=0.3.0,<0.10.0) or pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0).
    And because spacy[ja] (3.3.1) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0)
 and spacy[ja] (3.3.0) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0), spacy[ja] (3.3.0 || 3.3.1 || 3.3.2 || 3.3.3 || 3.4.0 || 3.4.1 || 3.4.2 || 3.4.3 || 3.4.4 || 3.5.0 || 3.5.1 || 3.5.2 || 3.5.3 || 3.5.4 || 3.6.0 || 3.6.1 || 3.7.0 || 3.7.1 || 3.7.2) requires typer (>=0.3.0,<0.10.0) or pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0).
(1) So, because spacy[ja] (3.2.6) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0)
 and spacy[ja] (3.2.5) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0), spacy[ja] (3.2.5 || 3.2.6 || 3.3.0 || 3.3.1 || 3.3.2 || 3.3.3 || 3.4.0 || 3.4.1 || 3.4.2 || 3.4.3 || 3.4.4 || 3.5.0 || 3.5.1 || 3.5.2 || 3.5.3 || 3.5.4 || 3.6.0 || 3.6.1 || 3.7.0 || 3.7.1 || 3.7.2) requires typer (>=0.3.0,<0.10.0) or pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0).

    Because no versions of spacy match >3,<3.0.1 || >3.0.1,<3.0.2 || >3.0.2,<3.0.3 || >3.0.3,<3.0.4 || >3.0.4,<3.0.5 || >3.0.5,<3.0.6 || >3.0.6,<3.0.7 || >3.0.7,<3.0.8 || >3.0.8,<3.0.9 || >3.0.9,<3.1.0 || >3.1.0,<3.1.1 || >3.1.1,<3.1.2 || >3.1.2,<3.1.3 || >3.1.3,<3.1.4 || >3.1.4,<3.1.5 || >3.1.5,<3.1.6 || >3.1.6,<3.1.7 || >3.1.7,<3.2.0 || >3.2.0,<3.2.1 || >3.2.1,<3.2.2 || >3.2.2,<3.2.3 || >3.2.3,<3.2.4 || >3.2.4,<3.2.5 || >3.2.5,<3.2.6 || >3.2.6,<3.3.0 || >3.3.0,<3.3.1 || >3.3.1,<3.3.2 || >3.3.2,<3.3.3 || >3.3.3,<3.4.0 || >3.4.0,<3.4.1 || >3.4.1,<3.4.2 || >3.4.2,<3.4.3 || >3.4.3,<3.4.4 || >3.4.4,<3.5.0 || >3.5.0,<3.5.1 || >3.5.1,<3.5.2 || >3.5.2,<3.5.3 || >3.5.3,<3.5.4 || >3.5.4,<3.6.0 || >3.6.0,<3.6.1 || >3.6.1,<3.7.0 || >3.7.0,<3.7.1 || >3.7.1,<3.7.2 || >3.7.2,<3.7.4 || >3.7.4,<4.0.0.dev0 || >4.0.0.dev0,<4.0.0.dev1 || >4.0.0.dev1,<4.0.0.dev2 || >4.0.0.dev2,<4.0.0.dev3 || >4.0.0.dev3
 and spacy[ja] (4.0.0.dev0) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0), spacy[ja] (>3,<3.0.1 || >3.0.1,<3.0.2 || >3.0.2,<3.0.3 || >3.0.3,<3.0.4 || >3.0.4,<3.0.5 || >3.0.5,<3.0.6 || >3.0.6,<3.0.7 || >3.0.7,<3.0.8 || >3.0.8,<3.0.9 || >3.0.9,<3.1.0 || >3.1.0,<3.1.1 || >3.1.1,<3.1.2 || >3.1.2,<3.1.3 || >3.1.3,<3.1.4 || >3.1.4,<3.1.5 || >3.1.5,<3.1.6 || >3.1.6,<3.1.7 || >3.1.7,<3.2.0 || >3.2.0,<3.2.1 || >3.2.1,<3.2.2 || >3.2.2,<3.2.3 || >3.2.3,<3.2.4 || >3.2.4,<3.2.5 || >3.2.5,<3.2.6 || >3.2.6,<3.3.0 || >3.3.0,<3.3.1 || >3.3.1,<3.3.2 || >3.3.2,<3.3.3 || >3.3.3,<3.4.0 || >3.4.0,<3.4.1 || >3.4.1,<3.4.2 || >3.4.2,<3.4.3 || >3.4.3,<3.4.4 || >3.4.4,<3.5.0 || >3.5.0,<3.5.1 || >3.5.1,<3.5.2 || >3.5.2,<3.5.3 || >3.5.3,<3.5.4 || >3.5.4,<3.6.0 || >3.6.0,<3.6.1 || >3.6.1,<3.7.0 || >3.7.0,<3.7.1 || >3.7.1,<3.7.2 || >3.7.2,<3.7.4 || >3.7.4,<4.0.0.dev1 || >4.0.0.dev1,<4.0.0.dev2 || >4.0.0.dev2,<4.0.0.dev3 || >4.0.0.dev3) requires pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0).
    And because spacy[ja] (4.0.0.dev1) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0)
 and spacy[ja] (4.0.0.dev2) depends on typer (>=0.3.0,<0.10.0), spacy[ja] (>3,<3.0.1 || >3.0.1,<3.0.2 || >3.0.2,<3.0.3 || >3.0.3,<3.0.4 || >3.0.4,<3.0.5 || >3.0.5,<3.0.6 || >3.0.6,<3.0.7 || >3.0.7,<3.0.8 || >3.0.8,<3.0.9 || >3.0.9,<3.1.0 || >3.1.0,<3.1.1 || >3.1.1,<3.1.2 || >3.1.2,<3.1.3 || >3.1.3,<3.1.4 || >3.1.4,<3.1.5 || >3.1.5,<3.1.6 || >3.1.6,<3.1.7 || >3.1.7,<3.2.0 || >3.2.0,<3.2.1 || >3.2.1,<3.2.2 || >3.2.2,<3.2.3 || >3.2.3,<3.2.4 || >3.2.4,<3.2.5 || >3.2.5,<3.2.6 || >3.2.6,<3.3.0 || >3.3.0,<3.3.1 || >3.3.1,<3.3.2 || >3.3.2,<3.3.3 || >3.3.3,<3.4.0 || >3.4.0,<3.4.1 || >3.4.1,<3.4.2 || >3.4.2,<3.4.3 || >3.4.3,<3.4.4 || >3.4.4,<3.5.0 || >3.5.0,<3.5.1 || >3.5.1,<3.5.2 || >3.5.2,<3.5.3 || >3.5.3,<3.5.4 || >3.5.4,<3.6.0 || >3.6.0,<3.6.1 || >3.6.1,<3.7.0 || >3.7.0,<3.7.1 || >3.7.1,<3.7.2 || >3.7.2,<3.7.4 || >3.7.4,<4.0.0.dev3 || >4.0.0.dev3) requires pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0) or typer (>=0.3.0,<0.10.0).
    And because spacy[ja] (4.0.0.dev3) depends on typer (>=0.3.0,<0.10.0)
 and spacy[ja] (3.0.0) depends on pydantic (>=1.7.1,<1.8.0), spacy[ja] (>=3,<3.0.1 || >3.0.1,<3.0.2 || >3.0.2,<3.0.3 || >3.0.3,<3.0.4 || >3.0.4,<3.0.5 || >3.0.5,<3.0.6 || >3.0.6,<3.0.7 || >3.0.7,<3.0.8 || >3.0.8,<3.0.9 || >3.0.9,<3.1.0 || >3.1.0,<3.1.1 || >3.1.1,<3.1.2 || >3.1.2,<3.1.3 || >3.1.3,<3.1.4 || >3.1.4,<3.1.5 || >3.1.5,<3.1.6 || >3.1.6,<3.1.7 || >3.1.7,<3.2.0 || >3.2.0,<3.2.1 || >3.2.1,<3.2.2 || >3.2.2,<3.2.3 || >3.2.3,<3.2.4 || >3.2.4,<3.2.5 || >3.2.5,<3.2.6 || >3.2.6,<3.3.0 || >3.3.0,<3.3.1 || >3.3.1,<3.3.2 || >3.3.2,<3.3.3 || >3.3.3,<3.4.0 || >3.4.0,<3.4.1 || >3.4.1,<3.4.2 || >3.4.2,<3.4.3 || >3.4.3,<3.4.4 || >3.4.4,<3.5.0 || >3.5.0,<3.5.1 || >3.5.1,<3.5.2 || >3.5.2,<3.5.3 || >3.5.3,<3.5.4 || >3.5.4,<3.6.0 || >3.6.0,<3.6.1 || >3.6.1,<3.7.0 || >3.7.0,<3.7.1 || >3.7.1,<3.7.2 || >3.7.2,<3.7.4 || >3.7.4) requires pydantic (>=1.7.1,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0) or typer (>=0.3.0,<0.10.0).
    And because spacy[ja] (3.0.1) depends on pydantic (>=1.7.1,<1.8.0)
 and spacy[ja] (3.0.2) depends on pydantic (>=1.7.1,<1.8.0), spacy[ja] (>=3,<3.0.3 || >3.0.3,<3.0.4 || >3.0.4,<3.0.5 || >3.0.5,<3.0.6 || >3.0.6,<3.0.7 || >3.0.7,<3.0.8 || >3.0.8,<3.0.9 || >3.0.9,<3.1.0 || >3.1.0,<3.1.1 || >3.1.1,<3.1.2 || >3.1.2,<3.1.3 || >3.1.3,<3.1.4 || >3.1.4,<3.1.5 || >3.1.5,<3.1.6 || >3.1.6,<3.1.7 || >3.1.7,<3.2.0 || >3.2.0,<3.2.1 || >3.2.1,<3.2.2 || >3.2.2,<3.2.3 || >3.2.3,<3.2.4 || >3.2.4,<3.2.5 || >3.2.5,<3.2.6 || >3.2.6,<3.3.0 || >3.3.0,<3.3.1 || >3.3.1,<3.3.2 || >3.3.2,<3.3.3 || >3.3.3,<3.4.0 || >3.4.0,<3.4.1 || >3.4.1,<3.4.2 || >3.4.2,<3.4.3 || >3.4.3,<3.4.4 || >3.4.4,<3.5.0 || >3.5.0,<3.5.1 || >3.5.1,<3.5.2 || >3.5.2,<3.5.3 || >3.5.3,<3.5.4 || >3.5.4,<3.6.0 || >3.6.0,<3.6.1 || >3.6.1,<3.7.0 || >3.7.0,<3.7.1 || >3.7.1,<3.7.2 || >3.7.2,<3.7.4 || >3.7.4) requires pydantic (>=1.7.1,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0) or typer (>=0.3.0,<0.10.0).
    And because spacy[ja] (3.0.3) depends on pydantic (>=1.7.1,<1.8.0)
 and spacy[ja] (3.0.4) depends on pydantic (>=1.7.1,<1.8.0), spacy[ja] (>=3,<3.0.5 || >3.0.5,<3.0.6 || >3.0.6,<3.0.7 || >3.0.7,<3.0.8 || >3.0.8,<3.0.9 || >3.0.9,<3.1.0 || >3.1.0,<3.1.1 || >3.1.1,<3.1.2 || >3.1.2,<3.1.3 || >3.1.3,<3.1.4 || >3.1.4,<3.1.5 || >3.1.5,<3.1.6 || >3.1.6,<3.1.7 || >3.1.7,<3.2.0 || >3.2.0,<3.2.1 || >3.2.1,<3.2.2 || >3.2.2,<3.2.3 || >3.2.3,<3.2.4 || >3.2.4,<3.2.5 || >3.2.5,<3.2.6 || >3.2.6,<3.3.0 || >3.3.0,<3.3.1 || >3.3.1,<3.3.2 || >3.3.2,<3.3.3 || >3.3.3,<3.4.0 || >3.4.0,<3.4.1 || >3.4.1,<3.4.2 || >3.4.2,<3.4.3 || >3.4.3,<3.4.4 || >3.4.4,<3.5.0 || >3.5.0,<3.5.1 || >3.5.1,<3.5.2 || >3.5.2,<3.5.3 || >3.5.3,<3.5.4 || >3.5.4,<3.6.0 || >3.6.0,<3.6.1 || >3.6.1,<3.7.0 || >3.7.0,<3.7.1 || >3.7.1,<3.7.2 || >3.7.2,<3.7.4 || >3.7.4) requires pydantic (>=1.7.1,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0) or typer (>=0.3.0,<0.10.0).
    And because spacy[ja] (3.0.5) depends on pydantic (>=1.7.1,<1.8.0)
 and spacy[ja] (3.0.6) depends on pydantic (>=1.7.1,<1.8.0), spacy[ja] (>=3,<3.0.7 || >3.0.7,<3.0.8 || >3.0.8,<3.0.9 || >3.0.9,<3.1.0 || >3.1.0,<3.1.1 || >3.1.1,<3.1.2 || >3.1.2,<3.1.3 || >3.1.3,<3.1.4 || >3.1.4,<3.1.5 || >3.1.5,<3.1.6 || >3.1.6,<3.1.7 || >3.1.7,<3.2.0 || >3.2.0,<3.2.1 || >3.2.1,<3.2.2 || >3.2.2,<3.2.3 || >3.2.3,<3.2.4 || >3.2.4,<3.2.5 || >3.2.5,<3.2.6 || >3.2.6,<3.3.0 || >3.3.0,<3.3.1 || >3.3.1,<3.3.2 || >3.3.2,<3.3.3 || >3.3.3,<3.4.0 || >3.4.0,<3.4.1 || >3.4.1,<3.4.2 || >3.4.2,<3.4.3 || >3.4.3,<3.4.4 || >3.4.4,<3.5.0 || >3.5.0,<3.5.1 || >3.5.1,<3.5.2 || >3.5.2,<3.5.3 || >3.5.3,<3.5.4 || >3.5.4,<3.6.0 || >3.6.0,<3.6.1 || >3.6.1,<3.7.0 || >3.7.0,<3.7.1 || >3.7.1,<3.7.2 || >3.7.2,<3.7.4 || >3.7.4) requires pydantic (>=1.7.1,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0) or typer (>=0.3.0,<0.10.0).
    And because spacy[ja] (3.0.7) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0)
 and spacy[ja] (3.0.8) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0), spacy[ja] (>=3,<3.0.9 || >3.0.9,<3.1.0 || >3.1.0,<3.1.1 || >3.1.1,<3.1.2 || >3.1.2,<3.1.3 || >3.1.3,<3.1.4 || >3.1.4,<3.1.5 || >3.1.5,<3.1.6 || >3.1.6,<3.1.7 || >3.1.7,<3.2.0 || >3.2.0,<3.2.1 || >3.2.1,<3.2.2 || >3.2.2,<3.2.3 || >3.2.3,<3.2.4 || >3.2.4,<3.2.5 || >3.2.5,<3.2.6 || >3.2.6,<3.3.0 || >3.3.0,<3.3.1 || >3.3.1,<3.3.2 || >3.3.2,<3.3.3 || >3.3.3,<3.4.0 || >3.4.0,<3.4.1 || >3.4.1,<3.4.2 || >3.4.2,<3.4.3 || >3.4.3,<3.4.4 || >3.4.4,<3.5.0 || >3.5.0,<3.5.1 || >3.5.1,<3.5.2 || >3.5.2,<3.5.3 || >3.5.3,<3.5.4 || >3.5.4,<3.6.0 || >3.6.0,<3.6.1 || >3.6.1,<3.7.0 || >3.7.0,<3.7.1 || >3.7.1,<3.7.2 || >3.7.2,<3.7.4 || >3.7.4) requires pydantic (>=1.7.1,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0) or typer (>=0.3.0,<0.10.0).
    And because spacy[ja] (3.0.9) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0)
 and spacy[ja] (3.1.0) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0), spacy[ja] (>=3,<3.1.1 || >3.1.1,<3.1.2 || >3.1.2,<3.1.3 || >3.1.3,<3.1.4 || >3.1.4,<3.1.5 || >3.1.5,<3.1.6 || >3.1.6,<3.1.7 || >3.1.7,<3.2.0 || >3.2.0,<3.2.1 || >3.2.1,<3.2.2 || >3.2.2,<3.2.3 || >3.2.3,<3.2.4 || >3.2.4,<3.2.5 || >3.2.5,<3.2.6 || >3.2.6,<3.3.0 || >3.3.0,<3.3.1 || >3.3.1,<3.3.2 || >3.3.2,<3.3.3 || >3.3.3,<3.4.0 || >3.4.0,<3.4.1 || >3.4.1,<3.4.2 || >3.4.2,<3.4.3 || >3.4.3,<3.4.4 || >3.4.4,<3.5.0 || >3.5.0,<3.5.1 || >3.5.1,<3.5.2 || >3.5.2,<3.5.3 || >3.5.3,<3.5.4 || >3.5.4,<3.6.0 || >3.6.0,<3.6.1 || >3.6.1,<3.7.0 || >3.7.0,<3.7.1 || >3.7.1,<3.7.2 || >3.7.2,<3.7.4 || >3.7.4) requires pydantic (>=1.7.1,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0) or typer (>=0.3.0,<0.10.0).
    And because spacy[ja] (3.1.1) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0)
 and spacy[ja] (3.1.2) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0), spacy[ja] (>=3,<3.1.3 || >3.1.3,<3.1.4 || >3.1.4,<3.1.5 || >3.1.5,<3.1.6 || >3.1.6,<3.1.7 || >3.1.7,<3.2.0 || >3.2.0,<3.2.1 || >3.2.1,<3.2.2 || >3.2.2,<3.2.3 || >3.2.3,<3.2.4 || >3.2.4,<3.2.5 || >3.2.5,<3.2.6 || >3.2.6,<3.3.0 || >3.3.0,<3.3.1 || >3.3.1,<3.3.2 || >3.3.2,<3.3.3 || >3.3.3,<3.4.0 || >3.4.0,<3.4.1 || >3.4.1,<3.4.2 || >3.4.2,<3.4.3 || >3.4.3,<3.4.4 || >3.4.4,<3.5.0 || >3.5.0,<3.5.1 || >3.5.1,<3.5.2 || >3.5.2,<3.5.3 || >3.5.3,<3.5.4 || >3.5.4,<3.6.0 || >3.6.0,<3.6.1 || >3.6.1,<3.7.0 || >3.7.0,<3.7.1 || >3.7.1,<3.7.2 || >3.7.2,<3.7.4 || >3.7.4) requires pydantic (>=1.7.1,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0) or typer (>=0.3.0,<0.10.0).
    And because spacy[ja] (3.1.3) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0)
 and spacy[ja] (3.1.4) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0), spacy[ja] (>=3,<3.1.5 || >3.1.5,<3.1.6 || >3.1.6,<3.1.7 || >3.1.7,<3.2.0 || >3.2.0,<3.2.1 || >3.2.1,<3.2.2 || >3.2.2,<3.2.3 || >3.2.3,<3.2.4 || >3.2.4,<3.2.5 || >3.2.5,<3.2.6 || >3.2.6,<3.3.0 || >3.3.0,<3.3.1 || >3.3.1,<3.3.2 || >3.3.2,<3.3.3 || >3.3.3,<3.4.0 || >3.4.0,<3.4.1 || >3.4.1,<3.4.2 || >3.4.2,<3.4.3 || >3.4.3,<3.4.4 || >3.4.4,<3.5.0 || >3.5.0,<3.5.1 || >3.5.1,<3.5.2 || >3.5.2,<3.5.3 || >3.5.3,<3.5.4 || >3.5.4,<3.6.0 || >3.6.0,<3.6.1 || >3.6.1,<3.7.0 || >3.7.0,<3.7.1 || >3.7.1,<3.7.2 || >3.7.2,<3.7.4 || >3.7.4) requires pydantic (>=1.7.1,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0) or typer (>=0.3.0,<0.10.0).
    And because spacy[ja] (3.1.5) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0)
 and spacy[ja] (3.1.6) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0), spacy[ja] (>=3,<3.1.7 || >3.1.7,<3.2.0 || >3.2.0,<3.2.1 || >3.2.1,<3.2.2 || >3.2.2,<3.2.3 || >3.2.3,<3.2.4 || >3.2.4,<3.2.5 || >3.2.5,<3.2.6 || >3.2.6,<3.3.0 || >3.3.0,<3.3.1 || >3.3.1,<3.3.2 || >3.3.2,<3.3.3 || >3.3.3,<3.4.0 || >3.4.0,<3.4.1 || >3.4.1,<3.4.2 || >3.4.2,<3.4.3 || >3.4.3,<3.4.4 || >3.4.4,<3.5.0 || >3.5.0,<3.5.1 || >3.5.1,<3.5.2 || >3.5.2,<3.5.3 || >3.5.3,<3.5.4 || >3.5.4,<3.6.0 || >3.6.0,<3.6.1 || >3.6.1,<3.7.0 || >3.7.0,<3.7.1 || >3.7.1,<3.7.2 || >3.7.2,<3.7.4 || >3.7.4) requires pydantic (>=1.7.1,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0) or typer (>=0.3.0,<0.10.0).
    And because spacy[ja] (3.1.7) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0)
 and spacy[ja] (3.2.0) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0), spacy[ja] (>=3,<3.2.1 || >3.2.1,<3.2.2 || >3.2.2,<3.2.3 || >3.2.3,<3.2.4 || >3.2.4,<3.2.5 || >3.2.5,<3.2.6 || >3.2.6,<3.3.0 || >3.3.0,<3.3.1 || >3.3.1,<3.3.2 || >3.3.2,<3.3.3 || >3.3.3,<3.4.0 || >3.4.0,<3.4.1 || >3.4.1,<3.4.2 || >3.4.2,<3.4.3 || >3.4.3,<3.4.4 || >3.4.4,<3.5.0 || >3.5.0,<3.5.1 || >3.5.1,<3.5.2 || >3.5.2,<3.5.3 || >3.5.3,<3.5.4 || >3.5.4,<3.6.0 || >3.6.0,<3.6.1 || >3.6.1,<3.7.0 || >3.7.0,<3.7.1 || >3.7.1,<3.7.2 || >3.7.2,<3.7.4 || >3.7.4) requires pydantic (>=1.7.1,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0) or typer (>=0.3.0,<0.10.0).
    And because spacy[ja] (3.2.1) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0)
 and spacy[ja] (3.2.2) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0), spacy[ja] (>=3,<3.2.3 || >3.2.3,<3.2.4 || >3.2.4,<3.2.5 || >3.2.5,<3.2.6 || >3.2.6,<3.3.0 || >3.3.0,<3.3.1 || >3.3.1,<3.3.2 || >3.3.2,<3.3.3 || >3.3.3,<3.4.0 || >3.4.0,<3.4.1 || >3.4.1,<3.4.2 || >3.4.2,<3.4.3 || >3.4.3,<3.4.4 || >3.4.4,<3.5.0 || >3.5.0,<3.5.1 || >3.5.1,<3.5.2 || >3.5.2,<3.5.3 || >3.5.3,<3.5.4 || >3.5.4,<3.6.0 || >3.6.0,<3.6.1 || >3.6.1,<3.7.0 || >3.7.0,<3.7.1 || >3.7.1,<3.7.2 || >3.7.2,<3.7.4 || >3.7.4) requires pydantic (>=1.7.1,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0) or typer (>=0.3.0,<0.10.0).
    And because spacy[ja] (3.2.3) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0)
 and spacy[ja] (3.2.4) depends on pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.9.0), spacy[ja] (>=3,<3.2.5 || >3.2.5,<3.2.6 || >3.2.6,<3.3.0 || >3.3.0,<3.3.1 || >3.3.1,<3.3.2 || >3.3.2,<3.3.3 || >3.3.3,<3.4.0 || >3.4.0,<3.4.1 || >3.4.1,<3.4.2 || >3.4.2,<3.4.3 || >3.4.3,<3.4.4 || >3.4.4,<3.5.0 || >3.5.0,<3.5.1 || >3.5.1,<3.5.2 || >3.5.2,<3.5.3 || >3.5.3,<3.5.4 || >3.5.4,<3.6.0 || >3.6.0,<3.6.1 || >3.6.1,<3.7.0 || >3.7.0,<3.7.1 || >3.7.1,<3.7.2 || >3.7.2,<3.7.4 || >3.7.4) requires pydantic (>=1.7.1,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0) or typer (>=0.3.0,<0.10.0).
    And because spacy[ja] (3.2.5 || 3.2.6 || 3.3.0 || 3.3.1 || 3.3.2 || 3.3.3 || 3.4.0 || 3.4.1 || 3.4.2 || 3.4.3 || 3.4.4 || 3.5.0 || 3.5.1 || 3.5.2 || 3.5.3 || 3.5.4 || 3.6.0 || 3.6.1 || 3.7.0 || 3.7.1 || 3.7.2) requires typer (>=0.3.0,<0.10.0) or pydantic (>=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0) (1), spacy[ja] (>=3,<3.7.4 || >3.7.4) requires typer (>=0.3.0,<0.10.0) or pydantic (>=1.7.1,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0)
    And because spacy[ja] (3.7.4) depends on typer (>=0.3.0,<0.10.0), spacy[ja] (>=3) requires typer (>=0.3.0,<0.10.0) or pydantic (>=1.7.1,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0).
    And because gradio (4.27.0) depends on both pydantic (>=2.0) and typer (>=0.12,<1.0), gradio (4.27.0) is incompatible with spacy[ja] (>=3).
    And because coqui-tts (0.23.1) depends on spacy[ja] (>=3)
 and no versions of coqui-tts match >0.23.1,<0.24.0, gradio (4.27.0) is incompatible with coqui-tts (>=0.23.1,<0.24.0).
    So, because test-app depends on both coqui-tts (^0.23.1) and gradio (4.27.0), version solving failed.

[Bug] tts-server hangs

Describe the bug

If I type tts-server --model_name, it just hangs with no output. No error, nothing.
Even tts-server -h does this.
The tts-server from the coqui-ai/TTS repo works.

To Reproduce

git clone https://github.com/idiap/coqui-ai-TTS
cd coqui-ai-TTS
pip install -e .[server]
tts-server

Expected behavior

It should start the server.

Logs

If I interrupt with Ctrl+C after a while, I get this:

(.venv) D:\code\python\coqui-ai-TTS>tts-server
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "D:\code\python\coqui-ai-TTS\.venv\Scripts\tts-server.exe\__main__.py", line 4, in <module>
  File "D:\code\python\coqui-ai-TTS\TTS\server\server.py", line 20, in <module>
    from TTS.utils.synthesizer import Synthesizer
  File "D:\code\python\coqui-ai-TTS\TTS\utils\synthesizer.py", line 12, in <module>
    from TTS.tts.configs.vits_config import VitsConfig
  File "D:\code\python\coqui-ai-TTS\TTS\tts\configs\vits_config.py", line 5, in <module>
    from TTS.tts.models.vits import VitsArgs, VitsAudioConfig
  File "D:\code\python\coqui-ai-TTS\TTS\tts\models\vits.py", line 34, in <module>
    from TTS.tts.utils.text.characters import BaseCharacters, BaseVocabulary, _characters, _pad, _phonemes, _punctuations
  File "D:\code\python\coqui-ai-TTS\TTS\tts\utils\text\__init__.py", line 1, in <module>
    from TTS.tts.utils.text.tokenizer import TTSTokenizer
  File "D:\code\python\coqui-ai-TTS\TTS\tts\utils\text\tokenizer.py", line 6, in <module>
    from TTS.tts.utils.text.phonemizers import DEF_LANG_TO_PHONEMIZER, get_phonemizer_by_name
  File "D:\code\python\coqui-ai-TTS\TTS\tts\utils\text\phonemizers\__init__.py", line 17, in <module>
    ESPEAK_LANGS = list(ESpeak.supported_languages().keys())
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\code\python\coqui-ai-TTS\TTS\tts\utils\text\phonemizers\espeak_wrapper.py", line 235, in supported_languages
    for count, line in enumerate(_espeak_exe(_DEF_ESPEAK_LIB, args, sync=True)):
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\code\python\coqui-ai-TTS\TTS\tts\utils\text\phonemizers\espeak_wrapper.py", line 80, in _espeak_exe
    res2 = list(res)
           ^^^^^^^^^
KeyboardInterrupt
^C

Environment

{
    "CUDA": {
        "GPU": [],
        "available": false,
        "version": null
    },
    "Packages": {
        "PyTorch_debug": false,
        "PyTorch_version": "2.3.0+cpu",
        "TTS": "0.23.1",
        "numpy": "1.26.4"
    },
    "System": {
        "OS": "Windows",
        "architecture": [
            "64bit",
            "WindowsPE"
        ],
        "processor": "Intel64 Family 6 Model 94 Stepping 3, GenuineIntel",
        "python": "3.11.9",
        "version": "10.0.19045"
    }

Additional context

No response

[Feature request] Compatibility with transformers>=4.43.2

Hello, I am currently working with the new LLaMA 3.1 models by Meta and they require the newer versions of transformers, optimum, and accelerate. I ran into compatibility issues with XTTS regarding the version of transformers.

I personally use the inference streaming feature, and that's where I am having issues.

Here is an error log I got:

Traceback (most recent call last):
  File "C:\Users\eyein\OneDrive\Desktop\Files\Discord Bots\JenEva-3.0\cogs\rt_tts_cog.py", line 501, in text_to_speech
    for j, chunk in enumerate(chunks):
  File "C:\Users\eyein\miniconda3\envs\JenEva\Lib\site-packages\torch\utils\_contextlib.py", line 35, in generator_context
    response = gen.send(None)
               ^^^^^^^^^^^^^^
  File "C:\Users\eyein\miniconda3\envs\JenEva\Lib\site-packages\TTS\tts\models\xtts.py", line 657, in inference_stream
    gpt_generator = self.gpt.get_generator(
                    ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eyein\miniconda3\envs\JenEva\Lib\site-packages\TTS\tts\layers\xtts\gpt.py", line 602, in get_generator
    return self.gpt_inference.generate_stream(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eyein\miniconda3\envs\JenEva\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eyein\miniconda3\envs\JenEva\Lib\site-packages\TTS\tts\layers\xtts\stream_generator.py", line 117, in generate
    - [~generation.BeamSampleDecoderOnlyOutput]
                             ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\eyein\miniconda3\envs\JenEva\Lib\site-packages\transformers\generation\utils.py", line 489, in _prepare_attention_mask_for_generation
    torch.isin(elements=inputs, test_elements=pad_token_id).any()
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: isin() received an invalid combination of arguments - got (elements=Tensor, test_elements=int, ), but expected one of:
 * (Tensor elements, Tensor test_elements, *, bool assume_unique, bool invert, Tensor out)
 * (Number element, Tensor test_elements, *, bool assume_unique, bool invert, Tensor out)
 * (Tensor elements, Number test_element, *, bool assume_unique, bool invert, Tensor out)

ERROR: None
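
A minimal standalone reproduction of the underlying call, grounded in the overloads listed in the error above (an illustration, not the library's code): calling torch.isin with the test_elements= keyword and a plain Python int matches none of the listed overloads, while a positional scalar or a tensor argument does.

import torch

inputs = torch.tensor([[5, 0, 7]])
pad_token_id = 0

# Matches the (Tensor elements, Number test_element) overload -> works
print(torch.isin(inputs, pad_token_id))

# Matches the (Tensor elements, Tensor test_elements) overload -> works
print(torch.isin(elements=inputs, test_elements=torch.tensor(pad_token_id)))

# The combination used in the traceback -- keyword test_elements= with a plain int --
# matches no overload on this PyTorch version and raises the TypeError shown above:
# torch.isin(elements=inputs, test_elements=pad_token_id)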

[Bug] trying to use `from TTS.api import TTS` results in bangla not found?

Describe the bug

I'm getting ModuleNotFoundError: No module named 'bangla' when trying to use from TTS.api import TTS, before actually trying to do anything with the library.

To Reproduce

  1. Have coqui-ai-TTS installed.
  2. Create a file called thing.py with just the line:
from TTS.api import TTS
  3. Try to run that file with Python 3.12:
$ python thing.py
Traceback (most recent call last):
  File "/home/friend/repos/smol-k8s-lab/thing.py", line 1, in <module>
    from TTS.api import TTS
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/api.py", line 11, in <module>
    from TTS.utils.synthesizer import Synthesizer
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/utils/synthesizer.py", line 12, in <module>
    from TTS.tts.configs.vits_config import VitsConfig
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/tts/configs/vits_config.py", line 5, in <module>
    from TTS.tts.models.vits import VitsArgs, VitsAudioConfig
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/tts/models/vits.py", line 34, in <module>
    from TTS.tts.utils.text.characters import BaseCharacters, BaseVocabulary, _characters, _pad, _phonemes, _punctuations
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/tts/utils/text/__init__.py", line 1, in <module>
    from TTS.tts.utils.text.tokenizer import TTSTokenizer
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/tts/utils/text/tokenizer.py", line 6, in <module>
    from TTS.tts.utils.text.phonemizers import DEF_LANG_TO_PHONEMIZER, get_phonemizer_by_name
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/tts/utils/text/phonemizers/__init__.py", line 1, in <module>
    from TTS.tts.utils.text.phonemizers.bangla_phonemizer import BN_Phonemizer
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/tts/utils/text/phonemizers/bangla_phonemizer.py", line 3, in <module>
    from TTS.tts.utils.text.bangla.phonemizer import bangla_text_to_phonemes
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/tts/utils/text/bangla/phonemizer.py", line 3, in <module>
    import bangla
ModuleNotFoundError: No module named 'bangla'

Expected behavior

no errors

Logs

Traceback (most recent call last):
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/bin/smol-tts", line 6, in <module>
    sys.exit(tts_gen())
             ^^^^^^^^^
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/friend/repos/smol-k8s-lab/smol_tts/__init__.py", line 157, in tts_gen
    from .audio_generation import AudioGenerator
  File "/home/friend/repos/smol-k8s-lab/smol_tts/audio_generation.py", line 10, in <module>
    from TTS.api import TTS
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/api.py", line 11, in <module>
    from TTS.utils.synthesizer import Synthesizer
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/utils/synthesizer.py", line 12, in <module>
    from TTS.tts.configs.vits_config import VitsConfig
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/tts/configs/vits_config.py", line 5, in <module>
    from TTS.tts.models.vits import VitsArgs, VitsAudioConfig
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/tts/models/vits.py", line 34, in <module>
    from TTS.tts.utils.text.characters import BaseCharacters, BaseVocabulary, _characters, _pad, _phonemes, _punctuations
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/tts/utils/text/__init__.py", line 1, in <module>
    from TTS.tts.utils.text.tokenizer import TTSTokenizer
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/tts/utils/text/tokenizer.py", line 6, in <module>
    from TTS.tts.utils.text.phonemizers import DEF_LANG_TO_PHONEMIZER, get_phonemizer_by_name
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/tts/utils/text/phonemizers/__init__.py", line 1, in <module>
    from TTS.tts.utils.text.phonemizers.bangla_phonemizer import BN_Phonemizer
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/tts/utils/text/phonemizers/bangla_phonemizer.py", line 3, in <module>
    from TTS.tts.utils.text.bangla.phonemizer import bangla_text_to_phonemes
  File "/home/friend/.cache/pypoetry/virtualenvs/smol-k8s-lab-rxIC-Q7--py3.12/lib/python3.12/site-packages/TTS/tts/utils/text/bangla/phonemizer.py", line 3, in <module>
    import bangla
ModuleNotFoundError: No module named 'bangla'

Environment

{
    "CUDA": {
        "GPU": [
            "NVIDIA GeForce GTX 1070 Ti"
        ],
        "available": true,
        "version": "12.1"
    },
    "Packages": {
        "PyTorch_debug": false,
        "PyTorch_version": "2.3.0+cu121",
        "TTS": "0.23.1",
        "numpy": "1.26.4"
    },
    "System": {
        "OS": "Linux",
        "architecture": [
            "64bit",
            "ELF"
        ],
        "processor": "",
        "python": "3.12.3",
        "version": "#1 SMP PREEMPT_DYNAMIC Debian 6.1.85-1 (2024-04-11)"
    }
}

Additional context

thanks again for maintaining this project 🙏

Add access to ModelManager and SpeakerManager

Hey there,

thanks a lot for maintaining coqui TTS, it's really awesome to have this.

I'm the dev of the RealtimeTTS project. In my implementation I'm using these:

from TTS.utils.manage import ModelManager
ModelManager().download_model(model_name)

and

from TTS.utils.generic_utils import get_user_data_dir
from TTS.config import load_config
from TTS.tts.models import setup_model as setup_tts_model
from TTS.tts.layers.xtts.xtts_manager import SpeakerManager

I noticed that these are not available anymore in the fork.

Can you please add these back, so we can use ModelManager, SpeakerManager, etc. directly from the TTS code?

Thanks a lot!

[Bug] HF transformers update breaks XTTS streaming

Describe the bug

huggingface/transformers#30624 broke the XTTS streaming code (https://github.com/idiap/coqui-ai-TTS/blob/df088e99dfda8976c47235626d3afb7d7d70fea2/TTS/tts/layers/xtts/stream_generator.py) as reported in huggingface/transformers#31040. The streaming code needs to be updated accordingly.

To Reproduce

https://coqui-tts.readthedocs.io/en/latest/models/xtts.html#streaming-manually

Expected behavior

No response

Logs

Loading model...
DEBUG:fsspec.local:open file: ~/.local/share/tts/tts_models--multilingual--multi-dataset--xtts_v2/model.pth
Computing speaker latents...
Inference...
~/applications/miniconda3/envs/coqui-3.12/lib/python3.12/site-packages/TTS/tts/layers/xtts/stream_generator.py:138: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
  warnings.warn(
Traceback (most recent call last):
  File "~/projects/testing/xtts.py", line 33, in <module>
    for i, chunk in enumerate(chunks):
  File "~/applications/miniconda3/envs/coqui-3.12/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
               ^^^^^^^^^^^^^^
  File "~/applications/miniconda3/envs/coqui-3.12/lib/python3.12/site-packages/TTS/tts/models/xtts.py", line 657, in inference_stream
    gpt_generator = self.gpt.get_generator(
                    ^^^^^^^^^^^^^^^^^^^^^^^
  File "~/applications/miniconda3/envs/coqui-3.12/lib/python3.12/site-packages/TTS/tts/layers/xtts/gpt.py", line 602, in get_generator
    return self.gpt_inference.generate_stream(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/applications/miniconda3/envs/coqui-3.12/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "~/applications/miniconda3/envs/coqui-3.12/lib/python3.12/site-packages/TTS/tts/layers/xtts/stream_generator.py", line 186, in generate
    model_kwargs["attention_mask"] = self._prepare_attention_mask_for_generation(
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/applications/miniconda3/envs/coqui-3.12/lib/python3.12/site-packages/transformers/generation/utils.py", line 473, in _prepare_attention_mask_for_generation
    torch.isin(elements=inputs, test_elements=pad_token_id).any()
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: isin() received an invalid combination of arguments - got (test_elements=int, elements=Tensor, ), but expected one of:
 * (Tensor elements, Tensor test_elements, *, bool assume_unique, bool invert, Tensor out)
 * (Number element, Tensor test_elements, *, bool assume_unique, bool invert, Tensor out)
 * (Tensor elements, Number test_element, *, bool assume_unique, bool invert, Tensor out)

Environment

{
    "CUDA": {
        "GPU": [],
        "available": false,
        "version": "12.1"
    },
    "Packages": {
        "PyTorch_debug": false,
        "PyTorch_version": "2.3.0+cu121",
        "TTS": "0.24.0",
        "numpy": "1.26.4"
    },
    "System": {
        "OS": "Linux",
        "architecture": [
            "64bit",
            "ELF"
        ],
        "processor": "x86_64",
        "python": "3.12.3",
        "version": "#35~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue May  7 09:00:52 UTC 2"
    }
}

Additional context

No response
