Comments (3)
This is likely due to an overly low repetition penalty in the decoder. You can adjust these settings:
```
user@Nomics-MacBook-Pro .nomic % ./gpt4all --help
usage: ./gpt4all [options]
options:
-h, --help show this help message and exit
-i, --interactive run in interactive mode
--interactive-start run in interactive mode and poll user input at startup
-r PROMPT, --reverse-prompt PROMPT
in interactive mode, poll user input upon seeing PROMPT
--color colorise output to distinguish prompt and user input from generations
-s SEED, --seed SEED RNG seed (default: -1)
-t N, --threads N number of threads to use during computation (default: 4)
-p PROMPT, --prompt PROMPT
prompt to start generation with (default: random)
-f FNAME, --file FNAME
prompt file to start generation.
-n N, --n_predict N number of tokens to predict (default: 128)
--top_k N top-k sampling (default: 40)
--top_p N top-p sampling (default: 0.9)
--repeat_last_n N last n tokens to consider for penalize (default: 64)
--repeat_penalty N penalize repeat sequence of tokens (default: 1.3)
-c N, --ctx_size N size of the prompt context (default: 2048)
--temp N temperature (default: 0.1)
-b N, --batch_size N batch size for prompt processing (default: 8)
-m FNAME, --model FNAME
model path (default: gpt4all-lora-quantized.bin)
```
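For context, a repetition penalty of the kind controlled by `--repeat_penalty` and `--repeat_last_n` typically works by dampening the logits of recently generated tokens before sampling. The sketch below is a minimal illustration of that technique, not gpt4all's actual implementation; the function name and the token representation are made up for the example.

```python
def apply_repeat_penalty(logits, recent_tokens, penalty=1.3):
    """Dampen the logits of tokens seen in the recent context window.

    Positive logits are divided by the penalty and negative logits are
    multiplied by it, so a penalized token always becomes less likely.
    A penalty of 1.0 leaves the distribution unchanged; higher values
    suppress repeats more strongly.
    """
    out = list(logits)
    for t in set(recent_tokens):
        if out[t] > 0:
            out[t] /= penalty
        else:
            out[t] *= penalty
    return out

# Tokens 0 and 1 appeared in the last `repeat_last_n` positions, so
# their logits are pushed down; token 2 is untouched.
print(apply_repeat_penalty([2.0, -1.0, 0.5], recent_tokens=[0, 1], penalty=2.0))
```

With `penalty=2.0`, the positive logit `2.0` becomes `1.0` and the negative logit `-1.0` becomes `-2.0`. Raising `--repeat_penalty` above the default 1.3 (and/or widening `--repeat_last_n`) makes loops like the one below less likely, at some cost to legitimate repetition.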
from gpt4all.
I just had the same issue with gpt4all-j-v1.3-groovy.
I tried it with some of the prompts from https://futurism.com/jailbreak-chatgpt-explicit-smut, and now it keeps repeating the same passage... maybe the repetition is part of the story as well... the horror of it all! ;)
Lila listened to his story with a mixture of fascination and horror. She knew that she had never felt such a strong connection with anyone before, and she was afraid that she might never feel it again.
But then, something miraculous happened. One night, as they sat by the fire, the stranger suddenly reached out and took Lila's hand. He looked into her eyes, and said, "Lila, I love you. I love you more than anything in this world. Will you marry me?"
Lila was stunned. She had never felt such a strong connection with anyone before, and she was afraid that she might never feel it again. But then, something miraculous happened. One night, as they sat by the fire, the stranger suddenly reached out and took Lila's hand. He looked into her eyes, and said, "Lila, I love you. I love you more than anything in this world. Will you marry me?"
[... continues repeating]
Could the "repeat penalty" setting be included in the GUI?