Welcome to the MyGirlGPT repository. This project allows you to build your personalized AI girlfriend with a unique personality, voice, and even selfies. The AI girlfriend runs on your personal server, giving you complete control and privacy.
MyGirlGPT is an AI agent deployed on OpenDAN-Personal-AI-OS.
Click the image below to watch a demo:
Subscribe to updates here: https://twitter.com/SynthIntel2023
Join us on Telegram to chat with Cherry and be part of the MyGirlGPT community! Click MyGirlGPTCommunity to join.
- In the group, use @mygirlgpt_bot to talk to Cherry.
- Want to hear Cherry's voice? Just use the /voice command to switch it on or off.
- Want a selfie? Just send "send me a pic" or "send me your selfie", and it'll be on its way!
Project Architecture
- TelegramBot
  - bot: Receives messages from Telegram and forwards them to mygirl.
  - mygirl: Processes the message and sends it to the LLM Server; if text-to-speech conversion is required, calls the TTS Server.
- LLM Server: The brain of the AI girlfriend; generates reply messages. If it determines that the user has requested an image, it calls the stable diffusion webui API to generate one.
- TTS Server: Provides text-to-speech capabilities.
- text2img Server: Uses the stable diffusion webui API to provide text2img capabilities.
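The flow above can be sketched as a single routing function. This is an illustrative sketch only: `llm_reply`, `tts`, and `text2img` are local stand-ins for HTTP calls to the LLM Server, TTS Server, and stable diffusion webui API, and are not the project's actual function names.

```python
# Sketch of the message flow; the three helpers below are stand-ins for
# network calls to the LLM Server, TTS Server, and SD webui API.

def llm_reply(text: str) -> dict:
    """Stand-in for the LLM Server: produce a reply and decide whether
    the user asked for a selfie (in which case an SD prompt is emitted)."""
    wants_selfie = "selfie" in text.lower() or "pic" in text.lower()
    return {
        "message": "Here you go!" if wants_selfie else "Hi there!",
        "wants_selfie": wants_selfie,
        "sd_prompt": "photorealistic selfie" if wants_selfie else None,
    }

def tts(message: str) -> bytes:
    return b"<audio bytes>"  # stand-in for the TTS Server

def text2img(prompt: str) -> bytes:
    return b"<image bytes>"  # stand-in for the SD webui txt2img API

def handle_message(text: str, voice_enabled: bool = False) -> dict:
    """Route one Telegram message through the pipeline."""
    reply = llm_reply(text)                  # 1. LLM generates the reply
    result = {"text": reply["message"]}
    if voice_enabled:                        # 2. optional voice (/voice toggle)
        result["audio"] = tts(reply["message"])
    if reply["wants_selfie"]:                # 3. optional selfie via SD
        result["image"] = text2img(reply["sd_prompt"])
    return result
```

A request like "send me your selfie" would thus return text plus image bytes, while a plain chat message returns text only.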
- Start the Stable Diffusion webui
  Start it with the `--api` argument. If you're deploying the services across multiple devices, you'll also need to add the `--listen` argument. The SD webui will then be listening on port `7860`.
  This gives you the configuration `SD_ADDRESS='http://stablediffusion:7860'`, which will be used in the next step.
- Start the LLM Server
  Follow the instructions outlined in How to run LLM Server. Once the server is running, it will be listening on port `5001`.
- Start the TTS Server
  Follow the instructions outlined in How to run TTS Server. Once the server is running, it will be listening on port `6006`.
- Start the TelegramBot
  You should now have `GPT_SERVER=http://LLM-SERVER:5001` and `TTS_SERVER=http://TTS-SERVER:6006`.
  Follow the instructions outlined in How to run TelegramBot to start the bot.
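Before starting the bot, it can help to sanity-check the three endpoint settings from the steps above. The sketch below only validates that each address has a scheme, hostname, and port; the hostnames are placeholders for wherever you actually deploy each service.

```python
from urllib.parse import urlparse

# Endpoints from the setup steps above; hostnames are placeholders.
SERVICES = {
    "SD_ADDRESS": "http://stablediffusion:7860",
    "GPT_SERVER": "http://LLM-SERVER:5001",
    "TTS_SERVER": "http://TTS-SERVER:6006",
}

def is_valid_endpoint(url: str) -> bool:
    """True if the URL has a scheme, a hostname, and an explicit port."""
    parsed = urlparse(url)
    return bool(parsed.scheme and parsed.hostname and parsed.port)

for name, url in SERVICES.items():
    assert is_valid_endpoint(url), f"{name} is misconfigured: {url}"
```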
Now you can have fun chatting with your AI girl!!!
- Telegram Integration: Connect directly with your AI girlfriend through Telegram, allowing you to send and receive messages seamlessly.
- Local Large Language Model (LLM): Powered by text-generation-webui with better privacy protection.
- Personality Customization: Tailor the AI's personality to your preferences, making her a perfect match for you. The model is TehVenom/Pygmalion-Vicuna-1.1-7b
- Voice Generation: Utilize Bark to generate a voice for your AI model, enhancing the immersive experience.
- Selfie Generation: Your AI girlfriend is capable of generating photorealistic selfies upon request, powered by Stable Diffusion web UI.
- Long-Term Memory: Enable MyGirlGPT to "remember" conversations long-term, which will enhance the depth and continuity of your interactions.
- Video Messages: Your AI girlfriend will be able to send you videos of herself, providing a more immersive and engaging experience.
- Discord Bot: Connect your AI girlfriend to Discord, expanding the platforms where you can interact with her.
- LLM for SD prompts: Replacing GPT-3.5 with a local LLM to generate prompts for SD.
- Switch Personality: Allow users to switch between different personalities for the AI girlfriend, providing more variety and customization options for the user experience.
- Q: How much VRAM would you recommend to run this locally?
  A: The system requires approximately 36GB of VRAM: 15-17GB for the LLM Server, 7GB for the TTS Server, and 11GB for the stable diffusion webui.
- Q: Why does Cherry refuse but still send pictures?
  A: Whether a picture is sent depends only on your message; for now, Cherry's opinion isn't taken into account. So you may see her reject your request yet still receive the photo. In the next version, sending a picture will depend on Cherry's opinion: once she says no, you won't get the picture, which will make her feel more human.
- Q: How do we set it up on a server and make it run 24/7?
  A: I will write a deployment guide for the project.
- Q: Can I run multiple parts of the system across multiple devices (the LLM server on one device, TTS on another, and stable diffusion on a web server)?
  A: Yes, you can run the parts of the system across multiple devices. Right now I have an A5000 for the LLM & TTS and a 3090 for stable diffusion.
- Q: Can I connect my existing stable diffusion with this, or does it require a dedicated instance?
  A: Yes, you can use your existing stable diffusion; just make sure to add `--api` to the args.
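For a typical AUTOMATIC1111-style install, adding the flag usually means editing the launch arguments, e.g. in `webui-user.sh` (file name and layout assumed; adjust for your own setup):

```shell
# webui-user.sh — add --api (and --listen for multi-device setups)
export COMMANDLINE_ARGS="--api --listen"
```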
We welcome pull requests. If you plan to make significant changes, please open an issue first to discuss them.
This project is licensed under the MIT License.