waifu-dev / w-ai-fu
Talk with or stream using an AI Vtuber [REQUIRES NOVELAI] blazing fast 🔥 READ THE README
License: GNU General Public License v3.0
help pls
I'm trying to update from 1.2.2 to 1.2.6, but I run into a loop: the update completes and replaces files, but always lands on 1.2.5. Each time I run it, the result is:
run > update to 1.2.6? > y > update complete > run shortcut > you are on 1.2.5, upgrade to 1.2.6? > y > update complete > run shortcut > you are on 1.2.5, upgrade to 1.2.6? > y > update complete > (repeat)
I have tried downloading the zip directly and doing a fresh install, but I still get the same result as above. Any suggestions? :/
(BTW LOVE THE PROJECT!)
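For anyone debugging an update loop like this: it usually means the version recorded on disk after the update never matches the target, or the version check itself compares version strings lexically instead of numerically. This sketch (hypothetical helpers, not the project's actual updater code) shows why numeric comparison matters:

```python
# Naive string comparison breaks on multi-digit parts ("1.2.10" < "1.2.6" as strings),
# so parse each dotted component to an int before comparing.
def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.strip().split("."))

def needs_update(installed: str, latest: str) -> bool:
    return parse_version(installed) < parse_version(latest)
```

If the updater writes the wrong version file (or never overwrites it), the check will report 1.2.5 forever regardless of what was downloaded.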
It will send one chat response, but then the web UI shows a prompt that reads "Could not reach the w-AI-fu application, it may have been closed." and no further responses work. I also didn't hear any audio output, even though I use play.ht and made sure to select it under TTS provider. This app is great though; is it possible to use your own local model like llama.cpp? Thank you!
Here is what the console window read:
Hello?
Hilda: Hey User, welcome back. How are things going? Is everything alright? If anything happens please contact me immediately via email. Also feel free to send me any feedback regarding this interview.
undefined:1
SyntaxError: Unexpected end of JSON input
at JSON.parse ()
at C:\Users\Desktop\w-AI-fu-main\w-AI-fu-main\w-AI-fu\app.js:762:27
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
Node.js v18.15.0
Press any key to continue . . .
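The crash above is app.js calling JSON.parse on an empty reply from a subprocess. The general fix, sketched here in Python with a hypothetical helper (the actual fix would go in app.js), is to guard the parse instead of letting it throw:

```python
import json

def safe_parse(raw: str):
    """Return the decoded object, or None if the payload is empty or invalid JSON."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None
```

A None result can then be treated as "subprocess gave no answer" rather than crashing the whole app.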
This might be due to an oversight in the reload process, gonna check it out
I am betting on a dependency or python install issue but not sure rn
This was really amazing, I'd like to thank you for this!
I find that when using NovelAI as the language model, the replies are very hit and miss for conversational situations. More often than not it's just too blunt or completely nonsensical.
Are there any plans to add other options apart from NovelAI? For example, something like CharacterAI.
Originally posted by FMRadio5555 September 28, 2023
I'm not sure what I'm doing wrong; the waifu file can't seem to find the Node.js env variable.
Node is installed, Python is installed, yet somehow neither can be found. I reviewed the step-by-step video provided, and in the video the waifu file installs all the dependencies it requires, yet when I do it, it just says
"Installing nodejs dependencies ...
Could not find node env variable, trying with direct path
Could not find an installed NodeJS environment. Please install NodeJS (prefer v19.8.1) from the official website: https://nodejs.org/en/download/releases
Press any key to continue . . ."
Where did I go wrong? The closest version I can get is NodeJS v19.9.0.
I apologize if this isn't the place to discuss this matter, but I'm not quite sure where or who to ask, so here I am. If there is a proper place to ask this, I can delete this and take it somewhere else if need be.
thank you for your time
w-AI-fu 1.2.9
Loading config informations ...
Loading character "Hilda" ...
Loading filter ...
Getting audio devices ...
Spawning subprocesses ...
python TTS:
(garbled locale-encoded Windows error message from the TTS subprocess)
Loaded LLM.
Critical Error: Could not contact TTS python script after 10s
Killing subprocesses ...
Exiting w.AI.fu
how could i fix it?
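"Could not contact TTS python script after 10s" means the main process never managed to reach the local port the TTS subprocess is supposed to listen on, usually because the Python script crashed on startup (as the garbled error above suggests). A stdlib-only way to check whether anything is actually listening (the host/port are whatever your install uses, not values from the project):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If the port is closed right after launch, the subprocess died; running the TTS script directly from a console will show its real startup error.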
I accepted the update today but I'm getting this error in the console: AssignProcessToJobObject: (87) The parameter is incorrect.
and the "app can't be reached" dialogue window.
This was working before I updated it however.
Hello, I was just letting the program run, and after about 10 minutes this error came up:
python TTS:
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x000002951F0BC3A0>
python TTS:
Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x000002951F015A80>, 16785.234)]']
connector: <aiohttp.connector.TCPConnector object at 0x000002951F0BC820>
python TTS:
Fatal error on SSL transport
protocol: <asyncio.sslproto.SSLProtocol object at 0x000002951F0BC340>
transport: <_ProactorSocketTransport fd=-1 read=<_OverlappedFuture cancelled>>
Traceback (most recent call last):
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\sslproto.py", line 690, in _process_write_backlog
self._transport.write(chunk)
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 365, in write
self._loop_writing(data=bytes(data))
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 401, in _loop_writing
self._write_fut = self._loop._proactor.send(self._sock, data)
AttributeError: 'NoneType' object has no attribute 'send'
python TTS:
Exception ignored in: <function _SSLProtocolTransport.__del__ at 0x00000295189D6A70>
Traceback (most recent call last):
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\sslproto.py", line 321, in __del__
self.close()
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\sslproto.py", line 316, in close
python TTS:
self._ssl_protocol._start_shutdown()
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\sslproto.py", line 599, in _start_shutdown
self._write_appdata(b'')
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\sslproto.py", line 604, in _write_appdata
python TTS:
self._process_write_backlog()
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\sslproto.py", line 712, in _process_write_backlog
python TTS:
self._fatal_error(exc, 'Fatal error on SSL transport')
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\sslproto.py", line 726, in _fatal_error
self._transport._force_close(exc)
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 152, in _force_close
python TTS:
self._loop.call_soon(self._call_connection_lost, exc)
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 753, in call_soon
python TTS:
self._check_closed()
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 515, in _check_closed
python TTS:
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
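The "Unclosed client session" and "Event loop is closed" messages mean the TTS script let the event loop shut down while an aiohttp session was still open. The usual remedy is to close the session inside the coroutine, before asyncio.run returns. A sketch with a stand-in session object (so it stays stdlib-only; aiohttp's real ClientSession has the same close() shape):

```python
import asyncio

class FakeSession:
    """Stand-in for aiohttp.ClientSession; only models open/close state."""
    def __init__(self):
        self.closed = False

    async def close(self):
        self.closed = True

async def main():
    session = FakeSession()
    try:
        pass  # ... perform requests with the session here ...
    finally:
        # Closing before the loop exits prevents "Unclosed client session"
        # and the cascade of "Event loop is closed" errors during teardown.
        await session.close()
    return session

session = asyncio.run(main())
```

With aiohttp itself, `async with aiohttp.ClientSession() as session:` achieves the same guarantee.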
I tried to change the audio outputs, but now I get no sound, and it keeps giving me this 500 Internal Server Error when I load the program. I tried going back to the default sound devices, but I get the error regardless of which sound devices I choose. I also tried running install.bat and installing the Python requirements from cmd.
Here are the results from running in test:
Entering TEST mode ...
Checking LLM response ...
Closed LLM.
passed.
Checking TTS response ...
Error: Received incorrect json from TTS.
<!doctype html>
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
Closed TTS.
passed.
Checking CHAT response ...
Closed CHAT.
passed.
Checking Text Input ...
passed.
Checking Voice Input ...
passed.
Successfuly passed all tests.
Killing subprocesses ...
Exiting w.AI.fu
Press any key to continue . . .
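"Received incorrect json from TTS" followed by `<!doctype html>` means the TTS endpoint answered with Flask's HTML 500 error page instead of JSON, so the JSON parse fails downstream. A guard like this (hypothetical helper, not the project's code) would surface the real problem instead of a parse error:

```python
import json

def parse_service_reply(raw: str):
    """Decode a JSON reply, raising a clear error when the body is an HTML error page."""
    if raw.lstrip().lower().startswith("<!doctype"):
        raise RuntimeError("service returned an HTML error page (likely HTTP 500)")
    return json.loads(raw)
```

The underlying 500 itself points at an exception inside the TTS script, typically a bad audio device or credentials issue.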
How to fix this error?
w-AI-fu 1.2.2
Loading config informations ...
Loading character "Hilda" ...
Loading filter ...
Getting audio devices ...
node:fs:601
handleErrorFromBinding(ctx);
^
Error: ENOENT: no such file or directory, open './devices/devices.json'
at Object.openSync (node:fs:601:3)
at Object.readFileSync (node:fs:469:35)
at getDevices (C:\Users\LINUXFY\Downloads\w-AI-fu main\w-AI-fu\app.js:1197:21)
at init (C:\Users\LINUXFY\Downloads\w-AI-fu main\w-AI-fu\app.js:356:5)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async main (C:\Users\LINUXFY\Downloads\w-AI-fu main\w-AI-fu\app.js:258:5) {
errno: -4058,
syscall: 'open',
code: 'ENOENT',
path: './devices/devices.json'
}
Node.js v18.16.0
Press any key to continue . . .
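The ENOENT above is app.js reading ./devices/devices.json before anything has created it. A create-if-missing read avoids the crash; sketched in Python with a hypothetical default payload (the real default shape is whatever app.js expects):

```python
import json
import os

def load_json_or_default(path: str, default):
    """Read a JSON file, first creating it (and its parent dirs) with a default."""
    if not os.path.exists(path):
        os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
        with open(path, "w", encoding="utf-8") as f:
            json.dump(default, f)
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)
```

As a user-side workaround, creating an empty devices/devices.json by hand (or re-running the install script that generates it) should get past this error.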
Originally posted by SpnkThe October 3, 2023
Hello, so I have a small issue with setting up this program.
After successful installation of the nodejs and python dependencies with install.bat, the run.bat file gives out an error:
w-AI-fu 1.2.9
Loading config informations ...
Loading character "Hilda" ...
Loading filter ...
Getting audio devices ...
Spawning subprocesses ...
python LLM:
Traceback (most recent call last):
File "C:\Users\jarek\Desktop\w-AI-fu main\w-AI-fu\novel\novel_llm.py", line 17, in <module>
python LLM:
from flask import Flask, request, jsonify
File "C:\Users\jarek\AppData\Local\Programs\Python\Python311\Lib\site-packages\flask\__init__.py", line 5, in <module>
from .app import Flask as Flask
File "C:\Users\jarek\AppData\Local\Programs\Python\Python311\Lib\site-packages\flask\app.py", line 30, in <module>
from werkzeug.urls import url_quote
ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (C:\Users\jarek\AppData\Local\Programs\Python\Python311\Lib\site-packages\werkzeug\urls.py)
python TTS:
Traceback (most recent call last):
File "C:\Users\jarek\Desktop\w-AI-fu main\w-AI-fu\novel\novel_tts.py", line 10, in <module>
python TTS:
from flask import Flask, request, jsonify, make_response
File "C:\Users\jarek\AppData\Local\Programs\Python\Python311\Lib\site-packages\flask\__init__.py", line 5, in <module>
python TTS:
from .app import Flask as Flask
File "C:\Users\jarek\AppData\Local\Programs\Python\Python311\Lib\site-packages\flask\app.py", line 30, in <module>
from werkzeug.urls import url_quote
ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (C:\Users\jarek\AppData\Local\Programs\Python\Python311\Lib\site-packages\werkzeug\urls.py)
Critical Error: Could not contact LLM python script after 10s
Killing subprocesses ...
Exiting w.AI.fu
Press any key to continue . . .
It's my first time being face to face with coding, and at this point I have no clue how this could be fixed. Any help?
My current Python version is 3.11.4 (but I had the same error on the recommended 3.10.10 version, which I deleted entirely).
My nodeJS version is 19.8.1
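For context on this ImportError: Werkzeug 3.0 removed url_quote, and older Flask releases still import it, so the usual fix is pinning Werkzeug below 3 (e.g. pip install "werkzeug<3") or upgrading Flask to a version that no longer uses url_quote. A version gate like the one below (hypothetical helper) captures the constraint:

```python
def werkzeug_has_url_quote(version: str) -> bool:
    """url_quote exists in Werkzeug 1.x/2.x but was removed in 3.0."""
    major = int(version.split(".")[0])
    return major < 3
```

This is an environment issue, not a w-AI-fu bug per se; it bites any project whose pinned Flask predates Werkzeug 3.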
It does not fetch messages from Twitch; do you have any solution for that?
w-AI-fu 1.2.9
Loading config informations ...
Loading character "Elai" ...
Getting audio devices ...
Spawning subprocesses ...
Loaded LLM.
Loaded TTS.
Connecting to the Twitch API ...
Obtained Twitch UID
Starting WebUI ...
Loaded w-AI-fu.
Commands: !mode [text, voice], !say [...], !script [_.txt], !chat [on, off], !history, !char, !reset, !stop, !save, !debug, !reload
> Successfully connected to Twitch EventSub WebSocket.
Received Auth token from Twitch API
Closed Twitch Chat WebSocket with message: 1006
Killing subprocesses ...
Closed LLM.
Closed TTS.
Reinitializing ...
Getting audio devices ...
Closed Twitch Events WebSocket with message: 1000 client disconnected
Loaded LLM.
Loaded TTS.
Connecting to the Twitch API ...
Obtained Twitch UID
Killing subprocesses ...
Closed LLM.
Closed TTS.
Reinitializing ...
Getting audio devices ...
node:events:492
throw er; // Unhandled 'error' event
^
Error: WebSocket was closed before the connection was established
at WebSocket.close (C:\Neyro\w-AI-fu main\w-AI-fu\node_modules\ws\lib\websocket.js:285:7)
at closeSubProcesses (C:\Neyro\w-AI-fu main\w-AI-fu\app.js:1113:33)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async handleCommand (C:\Neyro\w-AI-fu main\w-AI-fu\app.js:809:13)
at async main (C:\Neyro\w-AI-fu main\w-AI-fu\app.js:286:15)
Emitted 'error' event on WebSocket instance at:
at emitErrorAndClose (C:\Neyro\w-AI-fu main\w-AI-fu\node_modules\ws\lib\websocket.js:1008:13)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
Node.js v20.8.0
will have to add more error checking
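The ws error above ("WebSocket was closed before the connection was established") is thrown when close() is called on a socket still mid-handshake. The extra error checking mentioned boils down to guarding close with the connection state; a minimal stand-in sketch (the state names are illustrative, not the ws library's API):

```python
class GuardedSocket:
    """Stand-in socket that only allows close() once the handshake has finished."""
    CONNECTING, OPEN, CLOSED = range(3)

    def __init__(self):
        self.state = self.CONNECTING

    def on_open(self):
        self.state = self.OPEN

    def close(self):
        if self.state != self.OPEN:
            return  # closing mid-handshake is exactly what throws in ws
        self.state = self.CLOSED
```

In the Node code the equivalent check would consult the socket's ready state before closing during reinitialization.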
Hi, excuse me, but I have this problem. I already did steps 1 to 4, but when I open the "Run w-AI-fu" shortcut it says this:
Traceback (most recent call last):
File "C:\Users\sonic\Desktop\w-AI-fu-1.2.2 Vtuber\w-AI-fu\novel\novel_tts.py", line 3, in <module>
python TTS:
import pyaudio
ModuleNotFoundError: No module named 'pyaudio'
python LLM:
Traceback (most recent call last):
File "C:\Users\sonic\Desktop\w-AI-fu-1.2.2 Vtuber\w-AI-fu\novel\novel_llm.py", line 6, in <module>
python LLM:
from boilerplate import API
File "C:\Users\sonic\Desktop\w-AI-fu-1.2.2 Vtuber\w-AI-fu\novel\boilerplate.py", line 5, in <module>
python LLM:
from aiohttp import ClientSession
ModuleNotFoundError: No module named 'aiohttp'
Critical Error: Could not contact LLM python script after 10s
This happens out-of-the-box when the application is not correctly reloaded after changes to the credentials.
Oh, before this is closed: since you mentioned the config, the config is having trouble saving my name under username. It will always default back to "USER" instead of saving the name. I don't know if it has to do with the update loop. (Hope this helps!)
Originally posted by @newbornpowersource in #29 (comment)
Everything has been working fine since 1.2.2 update but now it won't open the browser page and reads: Loaded LLM.
Loaded TTS.
Critical Error: Could not contact CHAT python script after 10s
Killing subprocesses ...
Exiting w.AI.fu
Press any key to continue . . .
I tried opening a second run.bat, which usually solves the error, but this time it didn't work. Maybe you're already aware of this, but I thought I'd post an issue just in case.
Thank you
pls help, I don't know how to do that...
I get this error whenever I run the shortcut. Then, once it is launched, a browser pops up saying it couldn't reach the application. I also launched it with administrator privileges, and it says it can't find app.js.
Hi, currently w-AI-fu has several dependencies that require Python 3.11 or less.
Is there any timeline or interest in updating these dependencies?
Sincerely, Lars
If there are a lot of chat messages, allow adjustments on who gets a reply (or else the bot will build a very long queue of replies).
Rule 1. Toggle Prioritize by role: VIP > Donator/Bits > Subscribers by tier > Followers > Regular > First time. Use a weighted random choice, further configurable in config.json; in Python it looks like this: https://www.geeksforgeeks.org/how-to-get-weighted-random-choice-in-python/
Rule 2. Toggle Prioritize by chat frequency/scoreboard: after applying Rule 1, pick the user who was last replied to by the bot based on the scoreboard.
Rule 3. Toggle Prioritize questions: after evaluating Rules 1-2, give extra weight to chat messages classified as questions (see: https://github.com/huggingface/node-question-answering); alternatively, when using scoreboards, penalize users spamming with emotes.
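Rule 1's weighted pick is straightforward with random.choices; the role names and weights below are placeholders that would come from config.json, not suggested values:

```python
import random

ROLE_WEIGHTS = {  # placeholder weights; would be read from config.json
    "vip": 8, "donator": 6, "subscriber": 4,
    "follower": 2, "regular": 1, "first_time": 1,
}

def pick_message(messages):
    """Weighted random pick of one chat message based on the sender's role."""
    weights = [ROLE_WEIGHTS.get(m["role"], 1) for m in messages]
    return random.choices(messages, weights=weights, k=1)[0]
```

Rules 2 and 3 would then adjust the per-message weights (scoreboard recency, question classification) before the same random.choices call.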
So I was able to get it working and had connected the Twitch integration, but as soon as someone typed in chat I received:
Critical Error: Received incorrect json from LLM.
<!doctype html>
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
When reloaded, it keeps showing the same error and is unusable afterwards.
While the TTS has a timeout, the LLM doesn't, which means it can keep hanging for a while before the HTTP timeout kicks in.
It is usually caused by the NovelAI API not responding, so the best the program can/should do is handle it somewhat gracefully.
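A symmetric timeout on the LLM call could be sketched like this, stdlib-only; the wrapper and its name are hypothetical, and the timeout value would mirror whatever the TTS side uses:

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def call_with_timeout(fn, timeout_s, *args, **kwargs):
    """Run fn in a worker thread; raise FutureTimeout if it exceeds timeout_s."""
    with ThreadPoolExecutor(max_workers=1) as ex:
        return ex.submit(fn, *args, **kwargs).result(timeout=timeout_s)
```

The caller can then catch FutureTimeout and report "LLM did not respond" instead of hanging until the HTTP layer gives up.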
ImportError: DLL load failed while importing _sentencepiece: The specified module could not be found.
Critical Error: Could not contact LLM python script after 10s
Implement a needs system similar to The Sims https://www.youtube.com/watch?v=9gf2MT-IOsg
Gonna have to investigate that one
It keeps saying "Could not reach the w-AI-fu application, it may have been closed." while the program is still open, and the program says "received incorrect json from llm"; it also mentioned a server overload or an error in the application.
Is it possible to incorporate local TTS models into this project, so there is no reliance on paid services like NovelAI?
It would probably be easier to use TTS models that expose an API, but there are other TTS projects written in Python:
CoquiTTS
Tortoise TTS: https://github.com/jnordberg/tortoise-tts
Wave RNN: https://github.com/fatchord/WaveRNN
Tacotron2: https://github.com/Rayhane-mamah/Tacotron-2
Trying to connect my Twitch. Following the README.txt, everything says the ClientId and Secret are optional; however, I cannot connect to my Twitch chat with the Twitch Chat OAuth password and username alone. I went into the Twitch dev tools to create a Client ID and Client Secret, but as I'm not a dev and don't actually have an OAuth Redirect URL to provide, I cannot create this seemingly non-optional thing needed to connect my Twitch chat. Below are the errors I receive in the wAIfu console.
Killing subprocesses ...
Closed LLM.
Closed TTS.
Reinitializing ...
Getting audio devices ...
Closed Twitch Chat WebSocket with message: 1005
Loaded LLM.
Loaded TTS.
Connecting to the Twitch API ...
Failed to get Twitch App Access Token. This may be due to incorrect Twitch App ClientId or Secret.
Could not connect to Twitch EventSub.
w-AI-fu will continue without reading follows, subs and bits.
Follow this tutorial to enable the feature: https://github.com/wAIfu-DEV/w-AI-fu/wiki/Follower,-Subscribers,-Bits-interactions
Killing subprocesses ...
Closed LLM.
Closed TTS.
Reinitializing ...
Getting audio devices ...
Closed Twitch Chat WebSocket with message: 1005
Loaded LLM.
Loaded TTS.
Connecting to the Twitch API ...
Failed to get Twitch App Access Token. This may be due to incorrect Twitch App ClientId or Secret.
Could not connect to Twitch EventSub.
w-AI-fu will continue without reading follows, subs and bits.
Follow this tutorial to enable the feature: https://github.com/wAIfu-DEV/w-AI-fu/wiki/Follower,-Subscribers,-Bits-interactions
Killing subprocesses ...
Closed LLM.
Closed TTS.
Reinitializing ...
Getting audio devices ...
Critical Error: WebSocket was closed before the connection was established
Killing subprocesses ...
Exiting w.AI.fu
Press any key to continue . . .
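On the ClientId/Secret question: the app access token the log complains about comes from Twitch's client-credentials flow, which does not require a redirect URL at request time (the dev console still asks for one when registering the app; http://localhost is commonly entered there as a placeholder). The request shape, built with stdlib only and dummy credentials:

```python
from urllib.parse import urlencode

TOKEN_URL = "https://id.twitch.tv/oauth2/token"

def app_token_request(client_id: str, client_secret: str):
    """Build the POST endpoint and form body for Twitch's client-credentials grant."""
    body = urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
    })
    return TOKEN_URL, body
```

If this POST fails, the ClientId/Secret pair in the config is wrong or was regenerated on the Twitch side.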
It says it can't contact the LLM script and boots me out of the program.