
gpt-j's Introduction

Notice

Until further notice this API is officially down, until I can manage to regain access to GPT-J or switch to a new model entirely.

GPT-J

A GPT-J API for use with Python

Installing gpt-j

pip install gptj

Parameters

prompt: the prompt you wish to give to the model

tokens: the number of tokens to generate (values of 204 or fewer are recommended)

temperature: controls the randomness of the model; higher values are more random (suggested to keep it under 1.0; something like 0.3 works well)

top_p: top probability; sampling uses only the most likely tokens

top_k: top-k sampling; restricts sampling to the k most likely tokens

rep: the repetition penalty, which controls the likelihood of the model repeating the same tokens (lower values are more repetitive)

Advanced Parameters

user: the speaker, i.e. the person who is giving GPT-J a prompt

bot: an imaginary character of your choice

context: the part of the prompt that explains what is happening in the dialogue

examples: a dictionary of user intentions and how the bot should respond
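To make the advanced parameters concrete, here is a minimal sketch, not the package's actual implementation, of one plausible way the context, user/bot names, and examples dictionary could be flattened into a single prompt string (`build_prompt` is a hypothetical helper):

```python
def build_prompt(context, examples, user, bot, query):
    # Hypothetical prompt assembly -- the real gpt_j package may format differently.
    lines = [context]
    for q, a in examples.items():
        lines.append(f"{user}: {q}")
        lines.append(f"{bot}: {a}")
    lines.append(f"{user}: {query}")
    lines.append(f"{bot}:")  # the model continues from here
    return "\n".join(lines)

print(build_prompt(
    "This is a calculator bot that will answer basic math questions",
    {"5 + 5": "10", "6 - 2": "4"},
    "Student", "Calculator", "48 / 6"))
```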

Basic Usage

In the prompt, enter something you want to generate:

```python
from gpt_j.Basic_api import simple_completion

prompt = "def perfect_square(num):"

# The maximum length of the output response
max_length = 100

# Temperature controls the creativity of the model.
# A low temperature means the model will take fewer chances when completing
# a prompt; a high temperature will make it more creative.
# Both temperature and top probability must be floats.
temperature = 0.09

# Top probability is an alternative way to control the randomness of the model.
# If you are using top probability, set temperature to 1.0;
# if you are using temperature, set top probability to 1.0.
top_probability = 1.0

# Top k is an integer value that controls part of the sampling
top_k = 40

# The repetition penalty; lower values give more repetitive results
repetition = 0.216

# Set query equal to the desired values.
# Note: values higher than 512 tend to take more time to generate.
query = simple_completion(prompt, length=max_length, temp=temperature,
                          top_p=top_probability, top_k=top_k, rep=repetition)

# Finally, print the result
print(query)
```

Advanced Usage

Context is a string that describes the conversation:

```python
from gpt_j.gptj_api import Completion

context = "This is a calculator bot that will answer basic math questions"

# Examples should be a dictionary mapping each user query to the way the
# model should respond to it: queries on the left, target responses on the
# right. Here the user is asking the model math-related questions, and the
# desired answer for each is given on the right.
# Do not use periods at the end of user examples!
examples = {
    "5 + 5": "10",
    "6 - 2": "4",
    "4 * 15": "60",
    "10 / 5": "2",
    "144 / 24": "6",
    "7 + 1": "8"}

# Pass in the context and the examples
context_setting = Completion(context, examples)
```

Enter a prompt relevant to the previously defined user queries:

```python
prompt = "48 / 6"

# Pick a name relevant to what you are doing; you could change "Student"
# to "Task", for example, and get similar results
User = "Student"

# Name your imaginary character anything you want
Bot = "Calculator"

# Max tokens is the maximum length of the output response
max_tokens = 50

# Temperature controls the randomness of the model.
# A low temperature means the model will take fewer chances when completing
# a prompt; a high temperature makes it more creative and produces more
# random outputs. Note: both temperature and top probability must be floats.
temperature = 0.09

# Top probability is an alternative way to control the randomness of the model.
# If you are using it, set temperature to 1.0; if you are using temperature,
# set top probability to 1.0.
top_probability = 1.0

# Top k is an integer value that controls part of the sampling
top_k = 40

# The repetition penalty; lower values give more repetitive results
repetition = 0.216

# Give all the parameters; unfilled parameters fall back to their defaults,
# but filling them all in is recommended for better results.
response = context_setting.completion(prompt,
              user=User,
              bot=Bot,
              max_tokens=max_tokens,
              temperature=temperature,
              top_p=top_probability,
              top_k=top_k,
              rep=repetition)

# Last but not least, print the response. Please be patient: depending on
# the given parameters, generation can sometimes take a while. For quick
# responses, use the Basic API, which is a simplified version.
print(response)
```

Note: this is a very small model of 6B parameters and won't always produce accurate results.

Disclaimer

I have removed the security from the API; please don't use it unethically! I am not responsible for anything you do with the API.

License and copyright

Credit

This is all possible thanks to https://github.com/vicgalle/gpt-j-api

Feel free to check out the original API

License

© Michael D Arana

licensed under the MIT License.

gpt-j's People

Contributors

theprotaganist


gpt-j's Issues

Could you open source the repository?

Could you open source the package? It would be nice for users to be able to make modifications to the package that could later be pushed to the main branch. Thank you in advance!

JSONDecodeError when trying to execute basic and advanced python scripts

Can you update how to fix this issue? I am working on GPT-J to generate prompts and I am unable to run the code.

I am getting the following error when I try running python basic_usage_template.py and python advanced_usage_template.py:

```
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/hdd2/srinath/gpt-j/advanced_usage_template.py", line 58, in <module>
    response = context_setting.completion(prompt,
  File "/hdd2/srinath/anaconda3/envs/SD_Aug/lib/python3.10/site-packages/gpt_j/gptj_api.py", line 67, in completion
    self.response = generate(f"{self.new_prompt} {user}: {self.main_intention}", token_max_length=max_tokens, temperature=temperature, top_p=top_p, top_k=top_k, rep=rep)
  File "/hdd2/srinath/anaconda3/envs/SD_Aug/lib/python3.10/site-packages/gpt_j/Gptj.py", line 12, in generate
    result = URL.json()
  File "/hdd2/srinath/anaconda3/envs/SD_Aug/lib/python3.10/site-packages/requests/models.py", line 975, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```

My python version is 3.10.6
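A JSONDecodeError like this usually means the endpoint returned something other than JSON, for instance an HTML error page while the public API is down or rate-limited. As a hedged sketch, not part of the gpt_j package, a small helper can surface the raw response instead of the opaque decode error (`parse_api_response` is a hypothetical name):

```python
import json

def parse_api_response(status_code, body):
    # Fail loudly with the raw body when the endpoint returns non-JSON
    # (e.g. an HTML error page while the API is down or rate-limited).
    if status_code != 200:
        raise RuntimeError(f"API returned HTTP {status_code}: {body[:200]}")
    try:
        return json.loads(body)
    except json.JSONDecodeError:
        raise RuntimeError(f"API returned non-JSON data: {body[:200]}")
```

Wrapping the `URL.json()` call in `gpt_j/Gptj.py` with logic like this would at least show what the server actually sent back.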

How do I include a stop sequence?

Hello, I am creating a chatbot with this API. I was wondering how I can add a stop sequence so it doesn't keep on generating text after the response.

Thank you very much!
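The package doesn't document a native stop-sequence parameter, but one workaround, sketched below under that assumption, is to truncate the generated text client-side at the first stop sequence (`truncate_at_stop` is a hypothetical helper, not part of the gpt_j API):

```python
def truncate_at_stop(text, stop_sequences):
    # Cut the generated text at the earliest occurrence of any stop sequence.
    cut = len(text)
    for s in stop_sequences:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]

# Drop everything the model hallucinated after the bot's first answer
print(truncate_at_stop("10\nStudent: 3 + 3\nCalculator:", ["\nStudent:"]))  # → 10
```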

Decompressing step_383500.tar.zstd

I'm unable to extract step_383500.tar.zstd.

I have a pretty simple issue.

The type of this file shows as Express Zip File Compression, but when I try to open it with that tool, it claims the file is not supported. I tried searching for the file type and got nothing. How am I supposed to decompress this file?
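Despite what the file-type guesser reports, the `.zstd` extension indicates a Zstandard-compressed tar archive. A hedged sketch, assuming the `zstd` and `tar` command-line tools are installed:

```shell
# Decompress the Zstandard layer, then unpack the tar archive
zstd -d step_383500.tar.zstd   # produces step_383500.tar
tar -xf step_383500.tar        # unpacks the checkpoint files
```

With GNU tar this can also be done in one step via `tar --use-compress-program=unzstd -xf step_383500.tar.zstd`.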

New API Limit?

Has anyone else been suddenly getting this message? 'Sorry, the public API is limited to around 20 queries per every 30 minutes'

Is the API currently down?

Getting an HTTP error: '[WinError 10061] No connection could be made because the target machine actively refused it'

train

Hi, how can I train it with my own training data through Google Colab?

context_setting.completion error

Hi,

I am getting the following error from context_setting.completion API.
I have tried both the basic and advanced examples, but I get a similar issue.

Any pointers on how to resolve this?

Thanks!

```
/usr/lib/python3.7/json/decoder.py in raw_decode(self, s, idx)
    353             obj, end = self.scan_once(s, idx)
    354         except StopIteration as err:
--> 355             raise JSONDecodeError("Expecting value", s, err.value) from None
    356         return obj, end

JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```

=================================================================

My code, based on your example:

```python
from gpt_j.Basic_api import simple_completion

# In the prompt enter something you want to generate
prompt = "def perfect_square(num):"

# Max length of completion
Max = 200

# Temperature controls the creativity of the model.
# A low temperature means the model will take fewer chances when completing
# a prompt; a high temperature will make the model more creative.
# Both temperature and top probability must be a float.
temperature = 0.6

# Top probability is an alternative way to control the randomness of the model
top_probability = 1.0

# top_k is the number of top responses to return
top_k = 1

# Rep controls how often the model will repeat itself when generating a response
rep = 0.216

# Here you set query equal to the desired values.
# Note: values higher than 512 tend to take more time to generate.
res = context_setting.completion(prompt,
          user=User,
          bot=Bot,
          max_tokens=max_tokens,
          temperature=temperature,
          top_p=top_probability,
          top_k=top_k,
          rep=repetition)

# Finally we print the result
print(res)
```
