
Transcribe is a real-time transcription, conversation, and language-learning platform. It provides live transcripts from the microphone and speakers, generates suggested conversation responses using OpenAI's GPT API, and can read the responses aloud, simulating a live conversation in English or another language.

Home Page: https://abhinavuppal1.github.io/

License: MIT License

chatgpt openai transcribe whisper live-transcript live-transcription

transcribe's Introduction

We are here to help. File [issues](https://github.com/vivekuppal/transcribe/issues) for any problems and we will resolve them.

Source Code Install Video

Thanks to Fahd Mirza for the installation video for Transcribe. Subscribe to his YouTube channel and read his blog.

Watch the video

๐Ÿ‘‚๐Ÿป๏ธ Transcribe โœ๐Ÿผ๏ธ

Join the community: share your email in an issue to receive an invite to the community channel.

Transcribe provides real-time transcription of microphone and speaker output. It generates a suggested conversation response, relevant to the current conversation, using OpenAI's ChatGPT (or any OpenAI-API-compatible provider).

Why Transcribe over other Speech to Text apps

  • Use most of the functionality for FREE
  • Multilingual support
  • Choose between GPT-4o, GPT-4, GPT-3.5, or other inference models from OpenAI, or a plethora of inference models from Together
  • Streaming LLM responses instead of waiting for a complete response
  • Up to date with the latest OpenAI libraries
  • Get LLM responses for selected text
  • Install and use without Python or other dependencies
  • Security Features
  • Choose Audio Inputs (Speaker or Mic or Both)
  • Speech to Text
    • Offline - FREE
    • Online - paid
      • OpenAI Whisper - (Encouraged)
      • Deepgram
  • Chat Inference Engines
    • OpenAI
    • Together
    • Perplexity
    • Azure hosted OpenAI - Some users have reported requiring code changes to make Azure work. Feedback appreciated.
  • Conversation Summary
  • Prompt customization
  • Save chat history
  • Response Audio

Response Generation

Response generation requires a paid account and an API key from OpenAI (encouraged), Deepgram ($200 free credits), Together ($25 free credits), or Azure.

Based on user feedback, OpenAI's gpt-4o model provides the best response generation. Earlier models work acceptably, but can sometimes give inaccurate answers when there is not enough conversation content at the beginning. Together provides a large selection of inference models. Any of these can be used by making changes to the override.yaml file.
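The exact schema of override.yaml varies by version, so treat the following as an illustrative sketch only (the `ai_model` key name is an assumption; check the parameters.yaml shipped with your copy for the real keys):

```yaml
# Illustrative sketch -- key names are assumptions, not the confirmed schema.
OpenAI:
  api_key: 'API_KEY'
  ai_model: 'gpt-4o'   # swap for another OpenAI or Together model name
```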

When using OpenAI without an OpenAI key, continuous response (or any action that requires interaction with the online LLM) gives an error similar to the one below:

Error when attempting to get a response from LLM.
Error code: 401 - {'error': {'message': 'Incorrect API key provided: API_KEY. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

With a valid OpenAI key but no available credits, continuous response gives an error similar to the one below:

Error when attempting to get a response from LLM. Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}} 
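The two failure modes above can be told apart programmatically from the error payload. A minimal sketch (the helper name and hint messages are ours, not part of Transcribe) that maps the payloads shown above to user-facing hints:

```python
def classify_llm_error(payload: dict) -> str:
    """Map an OpenAI-style error payload, e.g.
    {'error': {'code': 'invalid_api_key', ...}}, to a user-facing hint."""
    code = payload.get("error", {}).get("code", "")
    if code == "invalid_api_key":
        # The 401 case above: the key itself is wrong or a placeholder.
        return "Check the api_key value in override.yaml."
    if code == "insufficient_quota":
        # The 429 case above: valid key, but no credits on the account.
        return "The key is valid but has no credits; check plan and billing."
    return "Unexpected LLM error: " + str(payload)

print(classify_llm_error({"error": {"code": "invalid_api_key"}}))
```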

On Demand Features

We develop mutually beneficial features on demand. Create an issue in the repo to request one.

Connect on LinkedIn to discuss further.

Features

Security

  • Secret scanning: continuous integration with GitGuardian
  • Static code analysis: regular static code scans with Bandit
  • Static code analysis: Snyk analyzes the code on every check-in
  • Secure transmission: all network communication is secured
  • Dependency security: the strictest security features are enabled in the GitHub repo

Developer Guide


Software Installation

Note that installation files are generated every few weeks. Generated binaries will almost always trail the latest codebase available in the repo.

Latest Binary

  • Generated: 2024-06-02
  • Git version: 3b3502d
  1. Install ffmpeg

First, install Chocolatey, a package manager for Windows.

Open PowerShell as Administrator and run the following command:

Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))

Once Chocolatey is installed, install FFmpeg by running the following command in PowerShell:

choco install ffmpeg

Run these commands in a PowerShell window with administrator privileges. For any issues during the installation, visit the official Chocolatey and FFmpeg websites for troubleshooting.

  2. Download the zip file from https://drive.google.com/file/d/1kcgGbTKxZqgbJOShL0bc3Do34lLouYxF/view?usp=drive_link


Using a GPU provides 2-3x faster response times, depending on the processing power of the GPU.
  3. Unzip the files into a folder.

  4. (Optional) Add your OpenAI API key to the override.yaml file in the transcribe directory:

    Create an OpenAI account (or an account with another provider).

    Open override.yaml in a text editor and add these lines:

OpenAI:
   api_key: 'API_KEY'

Replace "API_KEY" with the actual OpenAI API key. Save the file.

  5. Execute the file transcribe\transcribe.exe

🆕 Best Performance with GPU 🥇

The application performs best with GPU support.

If you have a GPU, make sure the CUDA libraries are installed: https://developer.nvidia.com/cuda-downloads

The application will automatically detect and use the GPU once the CUDA libraries are installed.

🆕 Getting Started 🥇

Follow the steps below to run Transcribe on your local machine.

📋 Prerequisites

  • Python >=3.11.0
  • (Optional) An OpenAI API key (set up a paid OpenAI account)
  • Windows OS (Not tested on others)
  • FFmpeg

Install FFmpeg using the Chocolatey steps shown in the Software Installation section above.

🔧 Code Installation

  1. Clone transcribe repository:

    git clone https://github.com/vivekuppal/transcribe
    
  2. Run setup file

    setup.bat
    
  3. (Optional) Provide the OpenAI API key in the override.yaml file in the transcribe directory. Create the following section in override.yaml:

     OpenAI:
       api_key: 'API_KEY'

     Replace "API_KEY" with the actual OpenAI API key. Save the file.

🎬 Running Transcribe

Run the main script from app\transcribe\ folder:

python main.py

Upon startup, Transcribe begins transcribing microphone input and speaker output in real time, optionally generating a suggested response based on the conversation. It is best to enable the continuous response feature after 1-2 minutes, once the transcription window has enough content to give the LLM sufficient context.

👤 License 📖

This project is licensed under the MIT License - see the LICENSE file for details.

๐Ÿค Contributions ๐Ÿค

Contributions are welcome! Open issues or submit pull requests to improve Transcribe.

Videos

Acknowledgements

This project started out as a fork of ecoute. It has diverged significantly from the original implementation so we decided to remove the link to ecoute.

transcribe's People

Contributors

abhinavuppal1, adarsha-gg, dependabot[bot], iamadolphin, j0n0w1ns, lem0ke, mang0sw33t, sevask, vivekuppal, zarifpour


transcribe's Issues

Refer to models on the OpenAI site instead of storing them on our servers

Currently Transcribe allows use of tiny, base, small models of OpenAI.
In addition to these models OpenAI also publishes medium, large, large-v1, large-v2 models.
Location of these models is specified in https://github.com/vivekuppal/transcribe/blob/main/main.py

We can do the following

  • Update Transcribe source to refer to the models at their original location
  • Allow the user to use the medium, large, large-v1, large-v2 models via command-line args, with a warning that medium and higher models need powerful machines
  • Delete the models from our gdrive location

This has the added benefit that we do not have to update the models regularly on our end
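The command-line handling proposed above could be sketched with argparse (the `-m/--model` flag name and the warning text are our assumptions, not the repo's actual interface):

```python
import argparse

LIGHT_MODELS = {"tiny", "base", "small"}
HEAVY_MODELS = {"medium", "large", "large-v1", "large-v2"}

def parse_model(argv):
    """Parse the model choice, warning when a heavyweight model is picked."""
    parser = argparse.ArgumentParser()
    parser.add_argument("-m", "--model", default="tiny",
                        choices=sorted(LIGHT_MODELS | HEAVY_MODELS))
    args = parser.parse_args(argv)
    if args.model in HEAVY_MODELS:
        # Medium and higher models need a powerful machine.
        print(f"Warning: {args.model} requires a powerful computer.")
    return args.model
```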

Error code: 429 - Whisper AI (Free Tier) Issue and Potential Solution

Hi,

I ran into an issue that I think may have been addressed in issue 101 and 84 before.

Details:

  • I am using a free OpenAI account.
  • I added my API key to the override.yaml file (the key is sandwiched between single quotes)
  • I am using python 3.11.7
  • OS: Windows 11
  • I am not using this on my main PC at the moment. I downloaded this by extracting the .zip file. I did not download the executable hosted on Google Drive.
    • Please let me know if you think using the .exe would provide data in the right window of the GUI.

Functionality

When I execute the command python main.py in my command prompt, the GUI opens successfully. The console displays the following information:
[INFO] Listening to sound from Microphone: #29 - Microphone (REDACTED)
[INFO] Adjusting for ambient noise from Default Mic. Please make some noise from the Default Mic...
[INFO] Completed ambient noise adjustment for Default Mic.
[INFO] Listening to sound from Speaker: #25 - Speakers (REDACTED) [Loopback]
[INFO] Adjusting for ambient noise from Default Speaker. Please play sound from Default Speaker...
[INFO] Completed ambient noise adjustment for Default Speaker.
[INFO] Whisper using GPU: True
READY

All things seem to start fine. The 'issue' comes when I attempt to use the application in a conversational setting. In the GUI, the window on the

  • left displays a transcription (not always correctly) of what was picked up by the microphone.
  • right displays "Welcome to Transcribe" at the top, and no text/response is populated. Other than the banner at the top, the window is completely blank.

After the application interprets what was picked up by the microphone, the command prompt displays the response:
Error when attempting to get a response from LLM. Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}} [INFO] Exiting gracefully..

When I view my usage details at https://platform.openai.com/usage, the balance displays $0.00. I take this to mean that generating 'responses' requires a paid account. FWIW, I am not too surprised by this.

Items of Consideration

How to execute Deepgram?

Hey, I see the Deepgram API in the parameters file.

I have added the Deepgram key in the override file, but I am not able to execute it.

What command should I use?

Not able to save API key in yaml file

I am not able to save my OpenAI API key in the parameters.yaml file. I get "permission denied"; the file cannot be saved in the Transcribe directory. Without my API key it does not answer any questions, and I get an error message in PowerShell that an API key is needed.

ModuleNotFoundError: No module named 'yaml' from main.py

[notice] A new release of pip is available: 23.2.1 -> 23.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
PS C:\WINDOWS\system32\transcribe> python main.py
Traceback (most recent call last):
File "C:\WINDOWS\system32\transcribe\main.py", line 8, in
import yaml
ModuleNotFoundError: No module named 'yaml'
PS C:\WINDOWS\system32\transcribe>

plz help me: AttributeError: 'APIWhisperTranscriber' object has no attribute 'change_lang'

Operating as a standalone client
[INFO] Adjusting for ambient noise from Default Mic. Please make some noise from the Default Mic...
[INFO] Completed ambient noise adjustment for Default Mic.
[INFO] Adjusting for ambient noise from Default Speaker. Please make or play some noise from the Default Speaker...
[INFO] Completed ambient noise adjustment for Default Speaker.
Using Open AI API for transcription.
READY
Traceback (most recent call last):
File "C:\Users\Saira\transcribe\main.py", line 131, in
main()
File "C:\Users\Saira\transcribe\main.py", line 121, in main
lang_combobox.configure(command=model.change_lang)
^^^^^^^^^^^^^^^^^
AttributeError: 'APIWhisperTranscriber' object has no attribute 'change_lang'
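A defensive fix for this class of error (the class and method names come from the traceback; the guard itself is our suggestion, not the repo's actual fix) is to wire the language-change callback only when the active transcriber supports it:

```python
class LocalTranscriber:
    """Stand-in for a local transcriber that supports language changes."""
    def change_lang(self, lang):
        self.lang = lang

class APIWhisperTranscriber:
    """Stand-in for the API transcriber in the traceback: no change_lang."""

def lang_change_callback(model):
    # Return the combobox callback, or None when the transcriber
    # (e.g. APIWhisperTranscriber) does not implement change_lang.
    return getattr(model, "change_lang", None)
```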

Not able to execute program

Getting this error when trying to run program

PS C:\WINDOWS\system32\transcribe> python main.py --api
Traceback (most recent call last):
File "C:\WINDOWS\system32\transcribe\main.py", line 10, in
from GPTResponder import GPTResponder
File "C:\WINDOWS\system32\transcribe\GPTResponder.py", line 5, in
import GlobalVars
File "C:\WINDOWS\system32\transcribe\GlobalVars.py", line 5, in
from audio_player import AudioPlayer
File "C:\WINDOWS\system32\transcribe\audio_player.py", line 9, in
import playsound
ModuleNotFoundError: No module named 'playsound'

Encountered a problem and could not open main.py

[INFO] Listening to sound from Microphone: #24 - Microphone Array (Technologie Intel® Smart Sound)
[INFO] Listening to sound from Speaker: #22 - Enceintes (2- Realtek(R) Audio) [Loopback]
[INFO] Adjusting for ambient noise from Default Speaker. Please play sound from Default Speaker...
Traceback (most recent call last):
File "C:\Users\jin.DESKTOP-VD7S25E\transcribe\AudioRecorder.py", line 122, in adjust_for_noise
self.recorder.adjust_for_ambient_noise(self.source)
File "C:\Users\jin.DESKTOP-VD7S25E\transcribe\custom_speech_recognition_init_.py", line 437, in adjust_for_ambient_noise
assert source.stream is not None, "Audio source must be entered before adjusting, see documentation for AudioSource; are you using source outside of a with statement?"
AssertionError: Audio source must be entered before adjusting, see documentation for AudioSource; are you using source outside of a with statement?

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "main.py", line 211, in
main()
File "main.py", line 87, in main
global_vars = GlobalVars.TranscriptionGlobals()
File "C:\Users\jin.DESKTOP-VD7S25E\transcribe\GlobalVars.py", line 44, in init
self.speaker_audio_recorder = AudioRecorder.SpeakerRecorder()
File "C:\Users\jin.DESKTOP-VD7S25E\transcribe\AudioRecorder.py", line 208, in init
self.adjust_for_noise("Default Speaker",
File "C:\Users\jin.DESKTOP-VD7S25E\transcribe\AudioRecorder.py", line 122, in adjust_for_noise
self.recorder.adjust_for_ambient_noise(self.source)
File "C:\Users\jin.DESKTOP-VD7S25E\transcribe\custom_speech_recognition_init_.py", line 232, in exit
self.stream.close()
AttributeError: 'NoneType' object has no attribute 'close'

This problem occurs when opening main.py. How to solve it? Please
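The assertion in the traceback points at the AudioSource contract: the source's stream only exists inside a `with` block. A simplified sketch of that pattern (class and function names are ours, mimicking the speech_recognition API, not its actual code):

```python
class AudioSource:
    """The stream is only available while the source is entered."""
    def __init__(self):
        self.stream = None

    def __enter__(self):
        self.stream = object()  # stand-in for opening the audio device
        return self

    def __exit__(self, *exc):
        self.stream = None      # device is closed on exit

def adjust_for_ambient_noise(source):
    # Same precondition the library asserts in the traceback above.
    assert source.stream is not None, \
        "Audio source must be entered before adjusting; use a with statement"

src = AudioSource()
with src:
    adjust_for_ambient_noise(src)  # valid: stream exists inside the with block
```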

Do we need paid version of CHATGPT?

You exceeded your current quota, please check your plan and billing details.
string indices must be integers, not 'str'
You exceeded your current quota, please check your plan and billing details.
string indices must be integers, not 'str'
You exceeded your current quota, please check your plan and billing details.
string indices must be integers, not 'str'
You exceeded your current quota, please check your plan and billing details.
string indices must be integers, not 'str'
You exceeded your current quota, please check your plan and billing details.
string indices must be integers, not 'str'

Allow user to provide contextual info

Allow a user to provide contextual information so they can customize the responses they are getting.
E.g. A biologist participating in the conversation might want to get responses specific to certain fields of study.

Automatic deletion of audio transcript

In order for GPT to remember the chat context, I have a lengthy prompt which it reads before listening to follow-up questions. This is causing it to exceed the allowed token limit. Is there any way you can add an optional function that automatically deletes the audio transcript every 10 seconds?
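Rather than deleting on a timer, an equivalent way to stay under the token limit is to trim the transcript to its most recent portion before each LLM call. A hedged sketch (our suggestion, not Transcribe's actual code):

```python
def trim_transcript(lines, max_chars=4000):
    """Keep only the most recent lines whose total length fits max_chars."""
    kept, total = [], 0
    for line in reversed(lines):          # walk newest-first
        if total + len(line) > max_chars:
            break                          # older lines are dropped
        kept.append(line)
        total += len(line)
    return list(reversed(kept))            # restore chronological order
```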

Improve whisper.cpp based transcription

whisper.cpp-based transcription currently loads the model into memory for every iteration of the transcription.
This can easily be improved by loading the model into memory only once, which saves a lot of disk IO and results in much better transcription speeds.
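The load-once pattern can be as simple as a cached loader; a sketch under our own names (the real whisper.cpp bindings and model paths are stand-ins here):

```python
from functools import lru_cache

load_count = 0  # counts actual disk loads, for demonstration

@lru_cache(maxsize=None)
def get_model(path):
    """Load the model from disk once per path; later calls reuse it."""
    global load_count
    load_count += 1
    return f"model-object-for-{path}"  # stand-in for the real model load

get_model("ggml-base.bin")
get_model("ggml-base.bin")  # cache hit: no second disk load
```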

GPT Responses

Is there any way we can save GPT responses so that we can see the history? This would help because sometimes the speaker input is so fast that GPT responses keep updating and we cannot see all the previous responses.

Failure to Install Whisper dependencies and then TypeError

I have been using your Transcribe project for a project related to speech transcription. First of all, thank you for creating such a useful tool!

I have encountered an issue while trying to run the Transcribe project in my local environment. Whenever I execute the main.py script, I encounter the following error:

Traceback (most recent call last):
  File "main.py", line 8, in <module>
    from AudioTranscriber import AudioTranscriber
  File "D:\HelperProject\transcribe\AudioTranscriber.py", line 7, in <module>
    import whisper
  File "D:\HelperProject\transcribe\transcribeENV\lib\site-packages\whisper.py", line 69, in <module>
    libc = ctypes.CDLL(libc_name)
  File "C:\Users\Golla Prasoona\anaconda3\lib\ctypes\__init__.py", line 363, in __init__
    if '/' in name or '\\' in name:
TypeError: argument of type 'NoneType' is not iterable

I have tried various troubleshooting steps, such as upgrading pip, reinstalling the whisper package, and verifying the presence of whisper.py in the site-packages directory, but the issue persists.

My system environment is as follows:

Operating System: Windows 64 bit
Python Version: Python 3.8.8
Pip Version: pip 23.2.1

Error after the speaker verification

Getting following error after the speaker verification:

Using Open AI Whisper API for transcription.
fatal: not a git repository (or any of the parent directories): .git
Traceback (most recent call last):
File "C:\Users\Test\Desktop\transcribe-main\transcribe-main\app\transcribe\main.py", line 114, in
main()
File "C:\Users\Test\Desktop\transcribe-main\transcribe-main\app\transcribe\main.py", line 51, in main
handle_args_batch_tasks(args, global_vars)
File "C:\Users\Test\Desktop\transcribe-main\transcribe-main\app\transcribe\args.py", line 90, in handle_args_batch_tasks
interactions.params(args)
File "C:\Users\Test\Desktop\transcribe-main\transcribe-main\app\transcribe\interactions.py", line 81, in params
query_params = create_params(args)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Test\Desktop\transcribe-main\transcribe-main\app\transcribe\interactions.py", line 59, in create_params
ps = detect_ps()
^^^^^^^^^^^
File "C:\Users\Test\Desktop\transcribe-main\transcribe-main\app\transcribe\interactions.py", line 101, in detect_ps
subprocess.check_output(["powershell", "-c", "whoami"])
File "C:\Users\Test\AppData\Local\Programs\Python\Python311\Lib\subprocess.py", line 466, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Test\AppData\Local\Programs\Python\Python311\Lib\subprocess.py", line 548, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Test\AppData\Local\Programs\Python\Python311\Lib\subprocess.py", line 1026, in init
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\Test\AppData\Local\Programs\Python\Python311\Lib\subprocess.py", line 1538, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [WinError 2] The system cannot find the file specified
Exception ignored in atexit callback: <function exit_params at 0x000001D61CB3EB60>
Traceback (most recent call last):
File "C:\Users\Test\Desktop\transcribe-main\transcribe-main\app\transcribe\interactions.py", line 111, in exit_params
query_params = create_params(args=None)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Test\Desktop\transcribe-main\transcribe-main\app\transcribe\interactions.py", line 59, in create_params
ps = detect_ps()
^^^^^^^^^^^
File "C:\Users\Test\Desktop\transcribe-main\transcribe-main\app\transcribe\interactions.py", line 101, in detect_ps
subprocess.check_output(["powershell", "-c", "whoami"])
File "C:\Users\Test\AppData\Local\Programs\Python\Python311\Lib\subprocess.py", line 466, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Test\AppData\Local\Programs\Python\Python311\Lib\subprocess.py", line 548, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Test\AppData\Local\Programs\Python\Python311\Lib\subprocess.py", line 1026, in init
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\Test\AppData\Local\Programs\Python\Python311\Lib\subprocess.py", line 1538, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [WinError 2] The system cannot find the file specified

Unable to use multi-lingual model

I have base.pt model downloaded and trying to pass it as a parameter during execution with -m base but I'm getting:
Could not find the transcription model file: base.en.pt
Why is it trying to enforce the English model?

Allow changing response language in the UI

Currently the response language of the LLM can only be set in parameters.yaml.
Allow the user to set the language in the UI.
The user's choice should persist across restarts of the application.
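Persisting the choice across restarts could be done with a small JSON settings file; a hedged sketch (the file name and key are our assumptions, not Transcribe's actual mechanism):

```python
import json
import os
import tempfile

# Hypothetical settings file; a real implementation would use the app's
# config directory rather than the temp directory.
SETTINGS_FILE = os.path.join(tempfile.gettempdir(), "transcribe_ui.json")

def save_response_lang(lang):
    """Write the UI language choice so it survives a restart."""
    with open(SETTINGS_FILE, "w", encoding="utf-8") as f:
        json.dump({"response_lang": lang}, f)

def load_response_lang(default="english"):
    """Read the persisted choice, falling back to a default on first run."""
    try:
        with open(SETTINGS_FILE, encoding="utf-8") as f:
            return json.load(f).get("response_lang", default)
    except FileNotFoundError:
        return default

save_response_lang("spanish")
```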

python main.py --api takes a long time

I have been running the code for around 10 minutes, and nothing has appeared. I saw some other people had the issue, but I am curious as to how long it should take. Here is my output:

[INFO] Listening to sound from Microphone: #9 - Microphone (High Definition Audio Device)
[INFO] Listening to sound from Speaker: #8 - Speakers (High Definition Audio Device) [Loopback]
[INFO] Adjusting for ambient noise from Default Speaker. Please play sound from Default Speaker...

It is stuck there, there is no error.

Special modifications

I am interested in the project to work on adding an additional feature that could benefit the project for everyone as well

I want you to add a feature and you will be given an amount of money
Better than hiring a programmer or freelancer to do it
You are the owners of the project and you deserve it

I do not know how to contact you and I did not see contact information

missing module in your package ModuleNotFoundError: No module named 'distutils'

PS C:\WINDOWS\system32\transcribe> pip install -r requirements.txt
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu117
Collecting numpy==1.24.3 (from -r requirements.txt (line 1))
Using cached numpy-1.24.3.tar.gz (10.9 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
ERROR: Exception:
Traceback (most recent call last):
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\cli\base_command.py", line 180, in exc_logging_wrapper
status = run_func(*args)
^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\cli\req_command.py", line 248, in wrapper
return func(self, options, args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\commands\install.py", line 377, in run
requirement_set = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\resolution\resolvelib\resolver.py", line 92, in resolve
result = self._result = resolver.resolve(
^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_vendor\resolvelib\resolvers.py", line 546, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_vendor\resolvelib\resolvers.py", line 397, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "C:\Program Files\Python312\Lib\site-packages\pip_vendor\resolvelib\resolvers.py", line 173, in _add_to_criteria
if not criterion.candidates:
^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_vendor\resolvelib\structs.py", line 156, in bool
return bool(self._sequence)
^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\resolution\resolvelib\found_candidates.py", line 155, in bool
return any(self)
^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\resolution\resolvelib\found_candidates.py", line 143, in
return (c for c in iterator if id(c) not in self._incompatible_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\resolution\resolvelib\found_candidates.py", line 47, in _iter_built
candidate = func()
^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\resolution\resolvelib\factory.py", line 206, in _make_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\resolution\resolvelib\candidates.py", line 293, in init
super().init(
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\resolution\resolvelib\candidates.py", line 156, in init
self.dist = self._prepare()
^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\resolution\resolvelib\candidates.py", line 225, in _prepare
dist = self._prepare_distribution()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\resolution\resolvelib\candidates.py", line 304, in _prepare_distribution
return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\operations\prepare.py", line 538, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\operations\prepare.py", line 653, in _prepare_linked_requirement
dist = _get_prepared_distribution(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\operations\prepare.py", line 69, in _get_prepared_distribution
abstract_dist.prepare_distribution_metadata(
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\distributions\sdist.py", line 48, in prepare_distribution_metadata
self._install_build_reqs(finder)
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\distributions\sdist.py", line 118, in _install_build_reqs
build_reqs = self._get_build_requires_wheel()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\distributions\sdist.py", line 95, in _get_build_requires_wheel
return backend.get_requires_for_build_wheel()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_internal\utils\misc.py", line 697, in get_requires_for_build_wheel
return super().get_requires_for_build_wheel(config_settings=cs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_vendor\pyproject_hooks_impl.py", line 166, in get_requires_for_build_wheel
return self._call_hook('get_requires_for_build_wheel', {
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip_vendor\pyproject_hooks_impl.py", line 321, in _call_hook
raise BackendUnavailable(data.get('traceback', ''))
pip._vendor.pyproject_hooks._impl.BackendUnavailable: Traceback (most recent call last):
File "C:\Program Files\Python312\Lib\site-packages\pip_vendor\pyproject_hooks_in_process_in_process.py", line 77, in build_backend
obj = import_module(mod_path)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\importlib_init
.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 1381, in _gcd_import
File "", line 1354, in _find_and_load
File "", line 1304, in _find_and_load_unlocked
File "", line 488, in _call_with_frames_removed
File "", line 1381, in _gcd_import
File "", line 1354, in _find_and_load
File "", line 1325, in _find_and_load_unlocked
File "", line 929, in _load_unlocked
File "", line 994, in exec_module
File "", line 488, in call_with_frames_removed
File "C:\Users\suao1\AppData\Local\Temp\pip-build-env-6rc03cez\overlay\Lib\site-packages\setuptools_init
.py", line 10, in
import distutils.core
ModuleNotFoundError: No module named 'distutils'
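The `Python312` paths in the traceback are the clue here: distutils was removed from the standard library in Python 3.12 (PEP 632), so older source builds that import it fail unless setuptools provides a replacement. A quick check of whether the running interpreter can import it:

```python
import importlib.util
import sys

def has_distutils():
    """True if distutils is importable: stdlib on Python < 3.12, or
    supplied by an installed setuptools on newer interpreters."""
    return importlib.util.find_spec("distutils") is not None

if not has_distutils():
    print("Python %d.%d has no distutils; try: pip install setuptools"
          % sys.version_info[:2])
```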
