nomic-ai / gpt4all-chat
gpt4all-j chat
License: Other
So everything downloaded and installed correctly.
But when I went to run the chat binary I got this error message:
QML debugging is enabled. Only use this in a safe environment.
gptj_model_load: loading model from 'ggml-gpt4all-j.bin' - please wait ...
gptj_model_load: n_vocab = 50400
gptj_model_load: n_ctx = 2048
gptj_model_load: n_embd = 4096
gptj_model_load: n_head = 16
gptj_model_load: n_layer = 28
gptj_model_load: n_rot = 64
gptj_model_load: f16 = 2
[1] 1035799 illegal hardware instruction (core dumped) ./chat
I'm running arch linux.
Kernel Version: 6.2.10
Any help would be appreciated, thank you!
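A crash with `illegal hardware instruction` during model load usually means the binary was compiled with CPU extensions (commonly AVX/AVX2) that the host CPU does not support. A quick diagnostic sketch, assuming a Linux `/proc/cpuinfo`:

```shell
# List the AVX-family instruction-set flags this CPU advertises.
# Empty output means the CPU has no AVX support at all.
grep -o 'avx[a-z0-9_]*' /proc/cpuinfo | sort -u
```

If `avx`/`avx2` are missing from the output, a prebuilt binary targeting them will crash exactly like this, and building from source without those flags is the usual workaround.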
https://github.com/manyoso/gpt4all-chat#license
The source code of this chat interface is currently under a GPL license.
The underlying GPT4All-J model is released under the non-restrictive, open-source Apache 2.0 license.
The installer succeeded and generated Applications/gpt4all/bin/gpt4all.app, but the app "does not run on this system":
16" MacbookPro 2019 (Intel i7)
Ventura 13.2.1
I guess it only runs on M1/M2?
The file names and error message above are from memory (I have already deleted everything).
Does not work on RHEL 9.1 / Rocky 9.1
/opt/gpt4all\ 0.1.0/bin/chat
/opt/gpt4all 0.1.0/bin/chat: error while loading shared libraries: libjpeg.so.8: cannot open shared object file: No such file or directory
Kindly include all the dependencies in /opt/gpt4all\ 0.1.0/lib/
for it to work.
[root@localhost jpeg-8d]# ldd /opt/gpt4all\ 0.1.0/bin/chat |grep -i not
/opt/gpt4all 0.1.0/bin/chat: /lib64/libm.so.6: version `GLIBC_2.35' not found (required by /opt/gpt4all 0.1.0/bin/chat)
/opt/gpt4all 0.1.0/bin/chat: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /opt/gpt4all 0.1.0/bin/chat)
libjpeg.so.8 => not found
libxcb-cursor.so.0 => not found
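The `ldd` output above already pinpoints the problem. A small sketch for collecting every unresolved dependency of the binary in one pass (the `/opt/gpt4all 0.1.0` path is taken from the report above):

```shell
# Print the first field of every "not found" line from the loader,
# i.e. each shared library the binary needs but cannot resolve.
ldd "/opt/gpt4all 0.1.0/bin/chat" | awk '/not found/ {print $1}' | sort -u
```

Each name in the output then needs either a distro package or a copy shipped in the bundled `lib/` directory, as the report requests.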
I'm attempting to install it on a 2019 Intel MacBook Pro. I tried to download the Mac Installer file, but it would start and quit. Tried several times and only got a partial download. So I tried the manual install. I downloaded and installed Qt and Cmake. I got as far as executing "cmake .." from the build folder, and I get the following error:
```
CMake Error at CMakeLists.txt:9 (find_package):
  By not providing "FindQt6.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "Qt6", but
  CMake did not find one.

  Could not find a package configuration file provided by "Qt6" (requested
  version 6.2) with any of the following names:

    Qt6Config.cmake
    qt6-config.cmake

  Add the installation prefix of "Qt6" to CMAKE_PREFIX_PATH or set "Qt6_DIR"
  to a directory containing one of the above files.  If "Qt6" provides a
  separate development package or SDK, be sure it has been installed.
```
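This CMake error means Qt6's package configuration files are not on CMake's search path. If Qt was installed via the online installer, pointing `CMAKE_PREFIX_PATH` at the Qt kit usually resolves it. A sketch, assuming a typical default install location (the version number and path here are examples; adjust them to the actual Qt directory on your machine):

```shell
# Tell CMake where the Qt 6 kit lives so find_package(Qt6) can succeed.
cmake -DCMAKE_PREFIX_PATH="$HOME/Qt/6.2.4/macos" ..
```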
Also, where do I put the model file(s) once I successfully install it?
Less of an idea to do with the tool or model, but a feature suggestion.
Would it be possible to have some kind of 'prompt' wiki or repository, where users (willing and able) could share prompts under Creative Commons CC-BY or CC-BY-SA terms for use by other individuals (subject to any ethical-use policy that gets implemented)? The reason for attribution is that lengthy prompts have a potentially creative element of their own, and in line with 'free' (in the GNU sense) ideals, I'd like to share prompt ideas that others may be able to build on.
Currently have the GPT-J model from Databricks (Dolly v2-12b) available and was wondering if I could map this to the overall GPT4All application?
This is specifically to be run on an M1 Max MBP w/OSX 13.1.x
LLM in reference is here: databricks/dolly-v2-12b
I installed the program, and when running it, it gives an error when playing a video file, as shown in the screenshot below:
https://filedn.com/lT1rqJefGodXboUQXukuGLh/photo/Screenshot_20230414_163910.png
Hi,
In VScode there is a plugin called Genie that uses ChatGPT and GPT (all models) as an aid to coding. It would be great to be able to use gpt4all-chat in a similar way. If it can be run locally and on my own system it would mean it could aid in better and easier coding and also detach from the OpenAI (now closed AI) system.
The extension in VSCode is called ChatGPT - Genie AI. It would be great if this could plug in to GPT4All instead of the OpenAI API.
This would be a fantastic enhancement and hopefully something of interest to you and the users.
Thank you
(In followup to Issue #4 )
In clarification and, for the avoidance of any doubt, the read-me and associated documentation should indicate whether mature, explicit, or NSFW content can or cannot be generated with the model/toolset, provided that the content (or its generation) does not constitute a breach of applicable legal or regulatory requirements in a given user's jurisdiction or region. (You might also add applicable community standards here, but those can vary quite considerably.)
As well as above ideally, the read-me (or a separate ethical use policy document) should indicate if certain sensitive areas are allowed or disallowed. (The recent update made effectively already gives the " Don't use this for illegal or unethical things." line in most typical usage policy.)
Some sample areas of potential concern follow (this is not an exhaustive list):
*Content which contains overt political or ideological content, or which is intended to inform/influence the views or choices of a potential (competent) reader on issues of public concern, or in an election. (Examples being campaign material, lobbying briefings, or public-service-announcement "fillers".)
*The use of fictionalized representations of potentially identifiable individuals (living or deceased), corporations (both current and defunct), and prominent brands, franchises, or trademarks associated with those individuals or corporations.
*Content which contains LGBTQI themes, including cross-dressing or explorations of non-binary and gender-fluid presentation.
*Content which, whilst not containing (explicit) depictions of actual sexual activity, may explore alternative sexuality, fetishes, or practices of a mutually consensual nature between informed, consenting adult participants.
*Use of profanity and pejoratives (in an appropriate context).
*Depictions of violence, crime, 'abuse', or self-harm (in line with the editorial standards typically applied in print or other media).
*Professional advice which would typically be given by a qualified individual under regulatory supervision (such as doctors, attorneys, financial advisers, architects, and engineers).
I know this may seem overly cautious, but it would seem reasonable to have some kind of guidance document beyond the typical "Do not do illegal, immoral or crazy things with this." common in most open-source projects, especially given that LLMs are getting media attention.
It seems like I have to download all the files from the installer.
Could you add support for x86 systems, or portable versions?
I must be stupid, but when I click on https://huggingface.co/EleutherAI/gpt-j-6b I don't see how to download it. (I thought I had mastered the downloading thing).
Hi.
Is it possible to have a version for Win32 on Windows 7 ?
I'm fine with a console version as long as I can log the outputs somehow..
Thanks.
Install on Pop!_OS 22.04 LTS:
Install appears without error. Executing ./chat results:
blm@pop-os:/opt/gpt4/bin$ sudo chown blm:blm * && chmod 777 *
blm@pop-os:/opt/gpt4/bin$ ls -la
total 3748036
drwxrwxr-x 2 root root 4096 Apr 13 12:24 .
drwxr-xr-x 6 root root 4096 Apr 14 13:53 ..
-rwxrwxrwx 1 blm blm 52726552 Apr 13 12:24 chat
-rwxrwxrwx 1 blm blm 3785248281 Apr 12 03:23 ggml-gpt4all-j.bin
blm@pop-os:/opt/gpt4/bin$ ./chat
./chat: error while loading shared libraries: libxcb-cursor.so.0: cannot open shared object file: No such file or directory
@pop-os:/opt/gpt4$ cat InstallationLog.txt
************************************* Invoked: Fri Apr 14 13:35:06 2023
[0] Arguments: ./gpt4all-0.1.0-Linux.run
[11] Operations sanity check succeeded.
[13] Using metadata cache from "/home/blm/.cache/qt-installer-framework/0973be00-4d7e-3bb7-a6fa-28e764f3a340"
[13] Found 0 cached items.
[13] Language: en-US
[4960] Fetching latest update information...
[5487] Retrieving meta information from remote repository...
[5691] Extracting meta information...
[5697] Updating local cache with 1 new items...
[5925] Loading component scripts...
[23588] Selected components without dependencies:
gpt4all
[23590] Installation space required: "3.86 GB" Temporary space required: "3.17 GB" Local repository size: "0.00 bytes"
[23592] Cache and install directories are on the same volume. Volume mount point: "/" Free space available: "413.63 GB"
[25114] backup operation: Mkdir
[25114] - arguments: /opt/gpt4
[25115] Done
[25115] perform operation: Mkdir
[25115] - arguments: /opt/gpt4
[25115] Done
[25115] Fallback: "/home/blm/Downloads/gpt4all-0.1.0-Linux.run --start-server PRODUCTION,/tmp/{dcb31162-fe8f-48cc-9b2e-717c690ea055},{532293a4-4fad-45a8-8e3e-5adfe0d8ca0d}"
[30499] perform operation: Mkdir
[30499] - arguments: /opt/gpt4
[30524] Done
[30526] Preparing the installation...
[30526] Install size: 1 components
[30527] Downloading packages...
[30528] Downloading archive "0.1.0bin.7z.sha1" for component gpt4all.
[30881] Downloading archive "0.1.0bin.7z" for component gpt4all.
[587154] Downloading archive "0.1.0lib.7z.sha1" for component gpt4all.
[587546] Downloading archive "0.1.0lib.7z" for component gpt4all.
[589627] Downloading archive "0.1.0content.7z.sha1" for component gpt4all.
[589970] Downloading archive "0.1.0content.7z" for component gpt4all.
[590413] Preparing to unpack components...
[590420] backup gpt4all concurrent operation: Extract
[590420] - arguments: installer://gpt4all/0.1.0content.7z, /opt/gpt4
[590420] Started
[590420] backup gpt4all concurrent operation: Extract
[590420] - arguments: installer://gpt4all/0.1.0bin.7z, /opt/gpt4
[590420] Started
[590421] backup gpt4all concurrent operation: Extract
[590421] - arguments: installer://gpt4all/0.1.0lib.7z, /opt/gpt4
[590421] Started
[590437] Unpacking components...
[590438] perform gpt4all concurrent operation: Extract
[590438] - arguments: installer://gpt4all/0.1.0bin.7z, /opt/gpt4
[590438] Started
[590438] perform gpt4all concurrent operation: Extract
[590438] - arguments: installer://gpt4all/0.1.0lib.7z, /opt/gpt4
[590438] Started
[590439] perform gpt4all concurrent operation: Extract
[590439] - arguments: installer://gpt4all/0.1.0content.7z, /opt/gpt4
[590439] Started
[1085573] Installing component gpt4all
[1085579] backup gpt4all operation: CreateDesktopEntry
[1085579] - arguments: /usr/share/applications/GPT4AllChat.desktop, Type=Application
Terminal=false
Exec="/opt/gpt4/bin/chat"
Name=GPT4All-Chat
Icon=/opt/gpt4/logo-48.png
Name[en_US]=GPT4All-Chat
[1085582] Done
[1085582] perform gpt4all operation: CreateDesktopEntry
[1085582] - arguments: /usr/share/applications/GPT4AllChat.desktop, Type=Application
Terminal=false
Exec="/opt/gpt4/bin/chat"
Name=GPT4All-Chat
Icon=/opt/gpt4/logo-48.png
Name[en_US]=GPT4All-Chat
[1085624] Done
[1085625] backup gpt4all operation: Copy
[1085625] - arguments: /usr/share/applications/GPT4AllChat.desktop, /home/blm/Desktop/GPT4AllChat.desktop
[1085627] Done
[1085628] perform gpt4all operation: Copy
[1085628] - arguments: /usr/share/applications/GPT4AllChat.desktop, /home/blm/Desktop/GPT4AllChat.desktop
[1085934] Done
[1085935] backup gpt4all operation: License
[1085935] - arguments:
[1085935] Done
[1085935] perform gpt4all operation: License
[1085935] - arguments:
[1085974] Done
[1086075] Writing maintenance tool: "/opt/gpt4/maintenancetool.new"
[1086076] Writing maintenance tool.
[1091510] Wrote permissions for maintenance tool.
[1092224] Maintenance tool hard restart: false.
[1092228] Installation finished!
************************************* Invoked: Fri Apr 14 14:05:08 2023
[0] Arguments: /opt/gpt4/maintenancetool
[23] Operations sanity check succeeded.
[25] Using metadata cache from "/home/blm/.cache/qt-installer-framework/0973be00-4d7e-3bb7-a6fa-28e764f3a340"
[25] Found 1 cached items.
[26] Language: en-US
[8304] Fetching latest update information...
[13138] Fetching latest update information...
[13165] Loading component scripts...
This problem was mentioned under a different heading
When I try to run ./chat I get
./chat: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.35' not found (required by ./chat)
./chat: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by ./chat)
./chat: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by ./chat)
./chat: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by ./chat)
./chat: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by ./chat)
./chat: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by ./chat)
./chat: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /home/charles/gpt4all0.1.0/bin/../lib/libicuuc.so.70)
./chat: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /home/charles/gpt4all0.1.0/bin/../lib/libicuuc.so.70)
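These errors mean the system's glibc and libstdc++ are older than what the prebuilt binary was linked against (glibc 2.35 corresponds roughly to Ubuntu 22.04). A diagnostic sketch for checking what the host actually provides (`strings` comes from binutils and may need installing; the libstdc++ path is the usual Debian/Ubuntu location):

```shell
# Report the installed glibc version; the binary above wants 2.35 or newer.
ldd --version | head -n 1

# List the newest GLIBCXX symbol versions the installed libstdc++ exports.
strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep '^GLIBCXX_' | sort -V | tail -n 3
```

If the reported versions are older than those demanded in the error messages, the options are upgrading the distro or building the chat binary from source against the system toolchain.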
Good morning,
As you can certainly guess, I have a small problem. I am on Ubuntu 22.04, and here is the result of my command ./chat in the directory concerned:
QML debugging is enabled. Only use this in a safe environment.
gptj_model_load: loading model from 'ggml-gpt4all-j.bin' - please wait ...
gptj_model_load: n_vocab = 50400
gptj_model_load: n_ctx = 2048
gptj_model_load: n_embd = 4096
gptj_model_load: n_head = 16
gptj_model_load: n_layer = 28
gptj_model_load: n_rot = 64
gptj_model_load: f16 = 2
gptj_model_load: ggml ctx size = 5401.45 MB
Illegal instruction (core dumped)
This leaves me speechless.
Thank you for your help
Full message -- "./gpt4all-0.1.0-Linux.run: error while loading shared libraries: libxcb-xinerama.so.0: cannot open shared object file: No such file or directory"
Happens during installer phase. Running Ubuntu 20.04.
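Here it is the installer itself that links against xcb-xinerama. On Ubuntu 20.04, installing the runtime package is usually enough; a sketch, assuming the standard Ubuntu package name:

```shell
# Provide libxcb-xinerama.so.0, which the Qt-based installer needs at startup.
sudo apt install libxcb-xinerama0
```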
After download and install of version 0.1.0 I ran into some issues.
./chat: error while loading shared libraries: libjpeg.so.8: cannot open shared object file: No such file or directory
I tried installing libjpeg-turbo-devel, but that did not help. After some searching, I ran the following commands, which fixed the error:
sudo dnf copr enable aflyhorse/libjpeg && sudo dnf install libjpeg8
I run the chat app again:
./chat: error while loading shared libraries: libxcb-cursor.so.0: cannot open shared object file: No such file or directory
To fix this error I ran:
sudo dnf install xcb-util-cursor
Success!
./chat
QML debugging is enabled. Only use this in a safe environment.
gptj_model_load: loading model from 'ggml-gpt4all-j.bin' - please wait ...
gptj_model_load: n_vocab = 50400
gptj_model_load: n_ctx = 2048
gptj_model_load: n_embd = 4096
gptj_model_load: n_head = 16
gptj_model_load: n_layer = 28
gptj_model_load: n_rot = 64
gptj_model_load: f16 = 2
gptj_model_load: ggml ctx size = 5401.45 MB
gptj_model_load: memory_size = 1792.00 MB, n_mem = 57344
gptj_model_load: ................................... done
gptj_model_load: model size = 3609.38 MB / num tensors = 285
It took a minute to load, and my whole screen froze while it did so, but after my browser (with many tabs open) crashed, the chat window opened.
Hope this is helpful.
Will you be adding earlier compatibility?
MacBook Pro 16" 2019, macOS Ventura 13
Is it possible to make the version of GPT4ALL installer for Intel Macs and for macOS 10.13?
It is great to have an option to update all components from one place.
So, unlike this issue #19, I am able to run it just fine (no errors or anything). But when I launch it, I get a grey screen.
The programme does not crash, and I can still use it (find the input box and talk to the bot just fine), it's just that no UI is present.
To be fair, I am running this inside a virtual machine to test it out before using it on my real machine. But I feel this shouldn't be a problem anyway? Unless Qt is actively trying to use the graphics card to draw the UI. But I still don't think that would make it not show. I will investigate on my own. These are my (real) machine specs, just in case.
OS: Windows 11
RAM: 32GB
CPU: Intel(R) Core(TM) i5-9600K CPU @3.70GHz
GPU: Radeon RX 580 Series 8GB VRAM
Virtualisation Software: VMware Player 16
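Qt Quick renders the UI on the GPU by default, and VM 3D acceleration is a common cause of a blank or grey window. Forcing Qt's software rasterizer is worth a try; a sketch using Qt's documented QT_QUICK_BACKEND environment variable:

```shell
# Force Qt Quick to render with the CPU software rasterizer instead of the GPU.
QT_QUICK_BACKEND=software ./chat
```

If the UI appears with this set, the problem is the VM's graphics stack rather than the application itself.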
I've installed GPT4All-J via the instructions provided on GitHub. I left everything in the installer as-is; otherwise you get permission errors while copying the .desktop file. When I go to the directory under /opt/gpt4all/bin and try to start the chat program, nothing happens. It drops immediately back to the prompt: no error message, no window.
Ubuntu 22.04 (fully updated)
Asus Z390-H
Intel I5 - 9600K
32 GB memory
Asus R390X (8GB)
So frustrated trying to use it. Trying to install from GitHub.
This is from memory, but it'll provide a starting point:
sudo chmod +x ./gpt4all-installer.run
sudo apt install libxcb0
sudo ./gpt4all-installer.run
That will likely get you installed, following the instructions. I personally don't put it in /opt/ but suit yourself.
After that, cd into the appropriate directory. To run the updater:
sudo ./maintenancetool
To run the GPT-J chat program:
./bin/chat
Everything went smoothly until this error.
No problems during download nor installations.
Any ideas?
Any chance of loading Spanish and other language models?
After installing, I click the chat.exe file in the bin folder, but nothing happens; no chat window opened.
It would be very useful to have an http api for this.
A primary use case will be running this as a game-server NPC chatbot backend, i.e. letting the AI control NPC dialogue. Having an API that can be accessed over local and remote HTTP would be very useful, as it would allow integration with any game (minetest, minecraft, your_game_here), and it would also allow actually streaming responses.
A generic http api would be useful for many other applications, such as python apps that want to loosely integrate this.
Is there a way to install by downloading the 0.1.0bin.7z file with a download manager? I have tried for hours now to use the installer, but it stops downloading and starts over again.
"Cannot open “gpt4all-0.1.0-Darwin.app” because the developer cannot be verified."
Apple M1, 8 GB RAM, macOS 13.3.1 (22E261)
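"Developer cannot be verified" is macOS Gatekeeper blocking an unsigned app. Right-clicking the app and choosing Open usually works; alternatively, the quarantine attribute can be cleared from the terminal. A sketch, assuming the app was moved to /Applications (adjust the path to wherever the .app actually lives):

```shell
# Remove the quarantine flag Gatekeeper sets on downloaded apps.
xattr -d com.apple.quarantine "/Applications/gpt4all-0.1.0-Darwin.app"
```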
There is no document explaining what types of content generation or use cases are allowed, other than the Apache 2.0 license on the model itself.
For certain uses, clarity on what is and isn't allowed would be desirable.
Confirmation as to the acceptability of "Creative Commons" licenses for generated outputs would also be appreciated.
Installation crashes on a French-language system because the installer tries to copy /usr/share/applications/GPT4AllChat.desktop to $HOME/Desktop, whereas on a French system the Desktop directory is $HOME/Bureau.
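The locale-correct desktop path is available from xdg-utils, so the installer could query it instead of hard-coding $HOME/Desktop. A sketch using the standard xdg-user-dir tool:

```shell
# Prints the localized desktop directory,
# e.g. /home/user/Bureau on a French-language system.
xdg-user-dir DESKTOP
```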
I had the above error while trying to run it on Linux Mint, and I fixed it by installing the missing library with the command below:
sudo apt install libxcb-cursor0
I hope this helps someone.
I am running Zorin OS 16; when I run the chat executable, it gives me this error:
./chat: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.35' not found (required by ./chat)
./chat: /lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by ./chat)
./chat: /lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by ./chat)
./chat: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by ./chat)
./chat: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by ./chat)
./chat: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by ./chat)
./chat: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /home/mohamed/AI/gpt4all/bin/../lib/libicuuc.so.70)
./chat: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /home/mohamed/AI/gpt4all/bin/../lib/libicuuc.so.70)
Is there any way to add a feature that lets the user choose whether or not to download during the installation process?
You can’t open the application “chat.app” because this application is not supported on this Mac.
This is the modal that appears whenever I try to open chat.app.
my macos version - 13.3.1 (22E261)
I would rather just install locally, on my user.
The installer from https://gpt4all.io/index.html insists on doing a system install.
Hi there, thanks for making this available.
I assume because it's just been released, it's getting hammered but the model download has timed out 3 or 4 times now and doesn't support session resume.
Might be worth adding this feature if possible to an updated installer version.
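Until the installer supports resuming, the model file can be fetched separately with a client that does. A sketch using curl's resume flag (the URL here is a placeholder; substitute the actual model download link):

```shell
# -C - resumes from where a previous partial download left off;
# -L follows redirects; -O keeps the remote filename.
curl -C - -L -O "https://example.com/path/to/ggml-gpt4all-j.bin"
```

Re-running the same command after a timeout continues the download instead of starting over.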
I would love to run node services against this local instance if possible. Has anyone tried to do that yet?