
w-ai-fu's People

Contributors

realpack, waifu-dev

w-ai-fu's Issues

Infinite update loop

I'm trying to update from 1.2.2 to 1.2.6, but I run into a loop: the update completes and replaces files, but I always end up on 1.2.5 instead, each time I run it.

Result: run > update to 1.2.6? > y > update complete > run shortcut > you are on 1.2.5, upgrade to 1.2.6? > y > update complete > run shortcut > you are on 1.2.5, upgrade to 1.2.6? > y > update complete > (repeat)

I have tried downloading the zip directly and doing a fresh install, but I still get the same result as above. Any suggestions? :/

(BTW LOVE THE PROJECT!)

Error after working for one response (SyntaxError: Unexpected end of JSON input)

It will send one chat response, but then the web UI shows a prompt that reads "Could not reach the w-AI-fu application, it may have been closed." and no further responses work. I also didn't hear any audio output; I used play.ht and made sure to select it under TTS provider. This app is great though. Is it possible to use your own local model, like llama.cpp? Thank you!

Here is what the console window read:

Hello?
Hilda: Hey User, welcome back. How are things going? Is everything alright? If anything happens please contact me immediately via email. Also feel free to send me any feedback regarding this interview.
undefined:1

SyntaxError: Unexpected end of JSON input
at JSON.parse (<anonymous>)
at C:\Users\Desktop\w-AI-fu-main\w-AI-fu-main\w-AI-fu\app.js:762:27
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)

Node.js v18.15.0
Press any key to continue . . .

AI keeps repeating itself

The AI kept repeating the greeting message for every question (see the example conversation in the attached screenshot).

Novel.AI LLM

This is really amazing, I'd like to thank you for it!
I find that when using NovelAI as the language model, the replies are very hit-and-miss in conversational situations; more often than not they're just too blunt or completely nonsensical.
Are there any plans to add other options apart from NovelAI? For example, something like CharacterAI.

Node env variable not found

Discussed in #41

Originally posted by FMRadio5555 September 28, 2023
I'm not sure what I'm doing wrong; the w-AI-fu file can't seem to find the Node.js env variable.

Node is installed and Python is installed, yet somehow neither can be found. I reviewed the step-by-step video that was provided, and in the video the w-AI-fu file installs all the dependencies it requires, yet when I do it, it just says:

"Installing nodejs dependencies ...
Could not find node env variable, trying with direct path
Could not find an installed NodeJS environment. Please install NodeJS (prefer v19.8.1) from the official website: https://nodejs.org/en/download/releases
Press any key to continue . . ."

Where did I go wrong? The closest version I can get is NodeJS v19.9.0.

I apologize if this isn't the place to discuss this, but I'm not quite sure where or whom to ask, so here I am. If there is a proper place to ask this, I can delete this and take it somewhere else if need be.

Thank you for your time.

run error after install

w-AI-fu 1.2.9
Loading config informations ...
Loading character "Hilda" ...
Loading filter ...
Getting audio devices ...
Spawning subprocesses ...
python TTS:
[garbled non-ASCII console output; likely the localized Windows socket error "An attempt was made to access a socket in a way forbidden by its access permissions"]

Loaded LLM.
Critical Error: Could not contact TTS python script after 10s
Killing subprocesses ...
Exiting w.AI.fu

How could I fix it?

RuntimeError: Event loop is closed

Hello, I was just letting the program run, and after about 10 minutes this error came up:

python TTS:
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x000002951F0BC3A0>

python TTS:
Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x000002951F015A80>, 16785.234)]']
connector: <aiohttp.connector.TCPConnector object at 0x000002951F0BC820>

python TTS:
Fatal error on SSL transport
protocol: <asyncio.sslproto.SSLProtocol object at 0x000002951F0BC340>
transport: <_ProactorSocketTransport fd=-1 read=<_OverlappedFuture cancelled>>
Traceback (most recent call last):
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\sslproto.py", line 690, in _process_write_backlog
self._transport.write(chunk)
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 365, in write
self._loop_writing(data=bytes(data))
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 401, in _loop_writing
self._write_fut = self._loop._proactor.send(self._sock, data)
AttributeError: 'NoneType' object has no attribute 'send'

python TTS:
Exception ignored in: <function _SSLProtocolTransport.__del__ at 0x00000295189D6A70>
Traceback (most recent call last):
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\sslproto.py", line 321, in __del__
self.close()
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\sslproto.py", line 316, in close

python TTS:
self._ssl_protocol._start_shutdown()
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\sslproto.py", line 599, in _start_shutdown
self._write_appdata(b'')
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\sslproto.py", line 604, in _write_appdata

python TTS:
self._process_write_backlog()
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\sslproto.py", line 712, in _process_write_backlog

python TTS:
self._fatal_error(exc, 'Fatal error on SSL transport')
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\sslproto.py", line 726, in _fatal_error
self._transport._force_close(exc)
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 152, in _force_close

python TTS:
self._loop.call_soon(self._call_connection_lost, exc)
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 753, in call_soon

python TTS:
self._check_closed()
File "C:\Users\MissS\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 515, in _check_closed

python TTS:
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
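
The "Unclosed client session" / "Unclosed connector" warnings and the final "Event loop is closed" are a common aiohttp pattern: an aiohttp.ClientSession is still open when the event loop shuts down, so its cleanup runs against a closed loop. A minimal sketch of the usual remedy, with a hypothetical fetch function standing in for whatever request code novel_tts.py actually runs:

    import asyncio
    import aiohttp

    # Hypothetical stand-in for the TTS script's request logic; the name and URL
    # are illustrative only. The async-with blocks guarantee the session and its
    # connector are closed before the event loop stops, which is exactly what the
    # "Unclosed client session" warning is complaining about.
    async def fetch_tts(url: str, payload: dict) -> bytes:
        async with aiohttp.ClientSession() as session:
            async with session.post(url, json=payload) as resp:
                resp.raise_for_status()
                return await resp.read()

    if __name__ == "__main__":
        # asyncio.run() creates the loop and closes it cleanly after the task finishes.
        audio = asyncio.run(fetch_tts("https://example.invalid/tts", {"text": "hi"}))

Whether this maps one-to-one onto novel_tts.py depends on code not shown in the log; the point is only that sessions closed via async-with (or in a finally block) do not leak past loop shutdown.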

TypeError: cannot read properties of undefined (reading '0')

[screenshot attached]

Hi, I have an error: when I type in the Twitch chat it closes the program with "TypeError: cannot read properties of undefined (reading '0')". I already added my Twitch name, OAuth token and all the other necessary settings, but it still closes the app when I type something in the Twitch chat.

Error: Received incorrect json from LLM

Hello everyone!
I have a problem with NovelAI: I enter the correct login information, but there is no response from the server. I created another account and had the same problem.
What could be the reason?
[screenshot attached]

Error: Received incorrect json from TTS.

I tried changing the audio outputs, but now I get no sound, and it keeps giving me this 500 Internal Server Error when I load the program. I tried going back to the default sound devices, but I get this error regardless of which sound devices I choose. I also tried running install.bat and installing the Python requirements.txt from the command line.
Here are the results from running in test:

Entering TEST mode ...
Checking LLM response ...
Closed LLM.
passed.
Checking TTS response ...
Error: Received incorrect json from TTS.
<!doctype html>

<title>500 Internal Server Error</title>

Internal Server Error

The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.

Closed TTS.
passed.
Checking CHAT response ...
Closed CHAT.
passed.
Checking Text Input ...
passed.
Checking Voice Input ...
passed.
Successfuly passed all tests.
Killing subprocesses ...
Exiting w.AI.fu
Press any key to continue . . .
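
The "<!doctype html> ... 500 Internal Server Error" body explains the "incorrect json" message: the Python side (the tracebacks elsewhere in these issues show novel_tts.py and novel_llm.py are Flask apps) answered with Flask's default HTML error page, which the Node process then fails to parse as JSON. A hedged sketch of how such a script could return JSON even on failure, so the caller at least sees the underlying error; the route name and port here are made up, not the project's actual API:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Hypothetical endpoint; the real routes in novel_tts.py / novel_llm.py may differ.
    @app.route("/api", methods=["POST"])
    def handle():
        data = request.get_json(force=True)
        # ... call the TTS / LLM backend here ...
        return jsonify({"text": data.get("text", "")})

    # Convert any unhandled exception into a JSON body instead of Flask's
    # default HTML 500 page, which the Node side cannot JSON.parse().
    @app.errorhandler(Exception)
    def error_as_json(err):
        return jsonify({"error": type(err).__name__, "detail": str(err)}), 500

    if __name__ == "__main__":
        app.run(port=7840)  # illustrative port only

The 500 itself still means something failed inside the TTS backend (bad credentials, an unreachable voice API, etc.); a JSON body would just make that failure visible instead of producing a parse error on the Node side.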

no such file or directory, open './devices/devices.json'

How to fix this error?

w-AI-fu 1.2.2
Loading config informations ...
Loading character "Hilda" ...
Loading filter ...
Getting audio devices ...
node:fs:601
handleErrorFromBinding(ctx);
^

Error: ENOENT: no such file or directory, open './devices/devices.json'
at Object.openSync (node:fs:601:3)
at Object.readFileSync (node:fs:469:35)
at getDevices (C:\Users\LINUXFY\Downloads\w-AI-fu main\w-AI-fu\app.js:1197:21)
at init (C:\Users\LINUXFY\Downloads\w-AI-fu main\w-AI-fu\app.js:356:5)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async main (C:\Users\LINUXFY\Downloads\w-AI-fu main\w-AI-fu\app.js:258:5) {
errno: -4058,
syscall: 'open',
code: 'ENOENT',
path: './devices/devices.json'
}

Node.js v18.16.0
Press any key to continue . . .

Python LLM and TTS issues

Discussed in #43

Originally posted by SpnkThe October 3, 2023
Hello, I have a small issue setting up this program.
After a successful installation of the Node.js and Python dependencies with install.bat, the run.bat file gives this error:

w-AI-fu 1.2.9
Loading config informations ...
Loading character "Hilda" ...
Loading filter ...
Getting audio devices ...
Spawning subprocesses ...
python LLM:
Traceback (most recent call last):
  File "C:\Users\jarek\Desktop\w-AI-fu main\w-AI-fu\novel\novel_llm.py", line 17, in <module>

python LLM:
    from flask import Flask, request, jsonify
  File "C:\Users\jarek\AppData\Local\Programs\Python\Python311\Lib\site-packages\flask\__init__.py", line 5, in <module>
    from .app import Flask as Flask
  File "C:\Users\jarek\AppData\Local\Programs\Python\Python311\Lib\site-packages\flask\app.py", line 30, in <module>
    from werkzeug.urls import url_quote
ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (C:\Users\jarek\AppData\Local\Programs\Python\Python311\Lib\site-packages\werkzeug\urls.py)

python TTS:
Traceback (most recent call last):
  File "C:\Users\jarek\Desktop\w-AI-fu main\w-AI-fu\novel\novel_tts.py", line 10, in <module>

python TTS:
    from flask import Flask, request, jsonify, make_response
  File "C:\Users\jarek\AppData\Local\Programs\Python\Python311\Lib\site-packages\flask\__init__.py", line 5, in <module>

python TTS:
    from .app import Flask as Flask
  File "C:\Users\jarek\AppData\Local\Programs\Python\Python311\Lib\site-packages\flask\app.py", line 30, in <module>
    from werkzeug.urls import url_quote
ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (C:\Users\jarek\AppData\Local\Programs\Python\Python311\Lib\site-packages\werkzeug\urls.py)

Critical Error: Could not contact LLM python script after 10s
Killing subprocesses ...
Exiting w.AI.fu
Press any key to continue . . .

It's my first time coming face to face with coding, and at this point I have no clue how this could be fixed. Any help?
My current Python version is 3.11.4 (but I had the same error on the recommended 3.10.10 version, which I deleted entirely).
My NodeJS version is 19.8.1.
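
The ImportError is a known Flask/Werkzeug version mismatch rather than anything specific to w-AI-fu: url_quote was removed in Werkzeug 3.0, while the older Flask releases that still import it (as flask\app.py does in this traceback) break on import. Assuming install.bat does not pin these itself, a typical workaround is to hold Werkzeug below 3.0:

    pip install "werkzeug<3"

Upgrading Flask to a recent release that no longer imports url_quote is the other route, though that may interact with whatever versions the project's requirements.txt expects.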

ImportError: DLL load failed

When I open the shortcut to boot up the cmd window, I get an error for both the python TTS and python LLM processes. What should I do? Here are screenshots of the errors:

[screenshots attached]

AssignProcessToJobObject: (87) The parameter is incorrect.

The issue occurs when I run the program.
And this is the command prompt error:
[screenshot attached]

If I run the shortcut as administrator, it looks like this:
[screenshot attached]

It seems like run.bat thinks node app.js is in System32, but it is not.

P.S.: I have checked the main.js file:
[screenshot attached]

I have NodeJs 20.3.1 and Python 3.10.11 installed
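
The System32 path in the screenshots is typical of elevated shortcuts: when a shortcut is run as administrator, Windows tends to ignore its "Start in" folder and launches the process in C:\Windows\System32, so relative paths in run.bat and app.js resolve against the wrong directory. Assuming run.bat does not already do this, a common workaround is to make the batch file change to its own folder before starting Node (the node line below is only a guess at what run.bat actually calls):

    rem Hypothetical first lines of run.bat; %~dp0 expands to the folder the .bat file lives in.
    cd /d "%~dp0"
    node app.js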

Crash

w-AI-fu 1.2.9
Loading config informations ...
Loading character "Elai" ...
Getting audio devices ...
Spawning subprocesses ...
Loaded LLM.
Loaded TTS.
Connecting to the Twitch API ...
Obtained Twitch UID
Starting WebUI ...
Loaded w-AI-fu.

Commands: !mode [text, voice], !say [...], !script [_.txt], !chat [on, off], !history, !char, !reset, !stop, !save, !debug, !reload
> Successfully connected to Twitch EventSub WebSocket.
Received Auth token from Twitch API
Closed Twitch Chat WebSocket with message: 1006
Killing subprocesses ...
Closed LLM.
Closed TTS.
Reinitializing ...
Getting audio devices ...
Closed Twitch Events WebSocket with message: 1000 client disconnected
Loaded LLM.
Loaded TTS.
Connecting to the Twitch API ...
Obtained Twitch UID
Killing subprocesses ...
Closed LLM.
Closed TTS.
Reinitializing ...
Getting audio devices ...
node:events:492
      throw er; // Unhandled 'error' event
      ^

Error: WebSocket was closed before the connection was established
    at WebSocket.close (C:\Neyro\w-AI-fu main\w-AI-fu\node_modules\ws\lib\websocket.js:285:7)
    at closeSubProcesses (C:\Neyro\w-AI-fu main\w-AI-fu\app.js:1113:33)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async handleCommand (C:\Neyro\w-AI-fu main\w-AI-fu\app.js:809:13)
    at async main (C:\Neyro\w-AI-fu main\w-AI-fu\app.js:286:15)
Emitted 'error' event on WebSocket instance at:
    at emitErrorAndClose (C:\Neyro\w-AI-fu main\w-AI-fu\node_modules\ws\lib\websocket.js:1008:13)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)

Node.js v20.8.0

Critical Error: Could not contact LLM python script after 10s

Hi, excuse me, but I have this problem. I already did steps 1 to 4, but when I open the "Run w-AI-fu" shortcut it says this:

Traceback (most recent call last):
File "C:\Users\sonic\Desktop\w-AI-fu-1.2.2 Vtuber\w-AI-fu\novel\novel_tts.py", line 3, in

python TTS:
import pyaudio
ModuleNotFoundError: No module named 'pyaudio'

python LLM:
Traceback (most recent call last):
File "C:\Users\sonic\Desktop\w-AI-fu-1.2.2 Vtuber\w-AI-fu\novel\novel_llm.py", line 6, in

python LLM:
from boilerplate import API
File "C:\Users\sonic\Desktop\w-AI-fu-1.2.2 Vtuber\w-AI-fu\novel\boilerplate.py", line 5, in

python LLM:
from aiohttp import ClientSession
ModuleNotFoundError: No module named 'aiohttp'

Critical Error: Could not contact LLM python script after 10s
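
Both tracebacks are plain ModuleNotFoundError cases: the Python interpreter that run.bat launches does not have the packages install.bat was supposed to install, which can happen when several Python versions are installed and pip targeted a different one. A hedged manual fix is to install them into the interpreter that actually runs the scripts:

    python -m pip install pyaudio aiohttp

Using python -m pip with the same python that run.bat invokes avoids installing into an unrelated Python installation; re-running install.bat after confirming python --version reports the recommended 3.10.x should have the same effect.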

Critical Error: Could not contact CHAT python script after 10s

Everything has been working fine since the 1.2.2 update, but now it won't open the browser page and reads:

Loaded LLM.
Loaded TTS.
Critical Error: Could not contact CHAT python script after 10s
Killing subprocesses ...
Exiting w.AI.fu
Press any key to continue . . .

I tried opening a second run.bat, which usually solves the error, but this time it didn't work. Maybe you're already aware of this, but I thought I'd post an issue just in case.

Thank you

Python 3.12 (and later) support

Hi, currently w-AI-fu has several dependencies that require Python 3.11 or lower.

Is there any timeline or interest in updating these dependencies?

Sincerely, Lars

Chat reply picking strategy

If there are a lot of chat messages, allow adjusting who gets a reply (otherwise the bot will build up a very long queue of replies).

Rule 1. Toggle: prioritize by roles: VIP > Donator/Bits > Subscribers by tier > Followers > Regular > First-time. Use a weighted random choice, further configurable in config.json; in Python it looks like this: https://www.geeksforgeeks.org/how-to-get-weighted-random-choice-in-python/ (see the sketch after this list).

Rule 2. Toggle: prioritize by chat frequency/scoreboard: after applying Rule 1, factor in a scoreboard of which user the bot last replied to.

Rule 3. Toggle: prioritize questions: after evaluating Rules 1-2, give extra weight to chat messages classified as questions (see https://github.com/huggingface/node-question-answering); alternatively, when using scoreboards, penalize users spamming emotes.
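
A minimal sketch of Rules 1-3 combined, in Python since the weighted-choice reference above is Python. The role weights, the message fields, and the reduction of the scoreboard and question-detection parts to simple multipliers are all illustrative assumptions, not the project's actual API:

    import random

    # Hypothetical role weights; in practice these would live in config.json (Rule 1).
    ROLE_WEIGHTS = {
        "vip": 8, "donator": 6, "subscriber": 4,
        "follower": 2, "regular": 1, "first_time": 0.5,
    }

    def pick_reply(messages, last_replied=None):
        """messages: list of dicts like {"user": str, "text": str, "role": str}."""
        weights = []
        for msg in messages:
            w = ROLE_WEIGHTS.get(msg.get("role", "regular"), 1)    # Rule 1: role priority
            if last_replied and msg["user"] == last_replied:       # Rule 2: scoreboard, reduced here to
                w *= 0.25                                          # "don't answer the same user twice in a row"
            if msg["text"].rstrip().endswith("?"):                 # Rule 3: naive question detection
                w *= 2
            weights.append(w)
        return random.choices(messages, weights=weights, k=1)[0]

    # Example usage:
    chat = [
        {"user": "a", "text": "hi!!", "role": "regular"},
        {"user": "b", "text": "what game is this?", "role": "subscriber"},
    ]
    print(pick_reply(chat, last_replied="a"))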

Continued Critical Error: Received incorrect json from LLM issue

So I was able to get it working and had connected the Twitch integration, but when someone typed in chat I received:
Critical Error: Received incorrect json from LLM.
<!doctype html>

<title>500 Internal Server Error</title>

Internal Server Error

The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.

When reloaded, it keeps showing the same error and is unusable afterwards.

Issue with LLM response 'invalid json'

It keeps saying 'Could not reach the w-AI-fu application, it may have been closed.' while the program was still open, and the program says 'received incorrect json from llm'; it also mentioned that the server was overloaded or there was an error in the application.

Twitch connection

I'm trying to connect my Twitch. Following the README.txt, everything says the clientId and Secret are optional, however I cannot connect to my Twitch chat with the Twitch Chat OAuth password and username alone. I have gone into the Twitch dev tools to create a Client ID and Client Secret, but as I'm not a dev and don't actually have an OAuth Redirect URL to provide, I cannot create what seems to be the non-optional thing needed to connect my Twitch chat. Below are the errors I receive in the w-AI-fu console.

Killing subprocesses ...
Closed LLM.
Closed TTS.
Reinitializing ...
Getting audio devices ...
Closed Twitch Chat WebSocket with message: 1005
Loaded LLM.
Loaded TTS.
Connecting to the Twitch API ...
Failed to get Twitch App Access Token. This may be due to incorrect Twitch App ClientId or Secret.
Could not connect to Twitch EventSub.
w-AI-fu will continue without reading follows, subs and bits.
Follow this tutorial to enable the feature: https://github.com/wAIfu-DEV/w-AI-fu/wiki/Follower,-Subscribers,-Bits-interactions
Killing subprocesses ...
Closed LLM.
Closed TTS.
Reinitializing ...
Getting audio devices ...
Closed Twitch Chat WebSocket with message: 1005
Loaded LLM.
Loaded TTS.
Connecting to the Twitch API ...
Failed to get Twitch App Access Token. This may be due to incorrect Twitch App ClientId or Secret.
Could not connect to Twitch EventSub.
w-AI-fu will continue without reading follows, subs and bits.
Follow this tutorial to enable the feature: https://github.com/wAIfu-DEV/w-AI-fu/wiki/Follower,-Subscribers,-Bits-interactions
Killing subprocesses ...
Closed LLM.
Closed TTS.
Reinitializing ...
Getting audio devices ...
Critical Error: WebSocket was closed before the connection was established
Killing subprocesses ...
Exiting w.AI.fu
Press any key to continue . . .
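
For reference, the "Failed to get Twitch App Access Token" line corresponds to Twitch's client-credentials flow, which only needs the Client ID and Client Secret; the OAuth Redirect URL field on dev.twitch.tv can be a placeholder such as http://localhost, since nothing is ever redirected in this flow. A small standalone check (assuming the requests library) to confirm the ID/Secret pair itself is valid before digging further into w-AI-fu's config:

    import requests

    CLIENT_ID = "your_client_id"          # from dev.twitch.tv/console/apps
    CLIENT_SECRET = "your_client_secret"

    # Client-credentials grant: no redirect URL is involved in this request.
    resp = requests.post(
        "https://id.twitch.tv/oauth2/token",
        data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "grant_type": "client_credentials",
        },
        timeout=10,
    )
    print(resp.status_code, resp.json())  # 200 with an "access_token" field means the pair is valid

A 200 here means the credentials are fine and the EventSub failure lies elsewhere; a 400/403 points at the ClientId/Secret values pasted into w-AI-fu's config.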
