
gptty's Issues

[context] allow positional context passing

Allow the user to pass positional information as a tag to :context and to questions, e.g.:

...
:context[a:b] - show context history with optional index ranges a and b *under development*
...
To pass context positionally, prepend questions with tags like `[a:b] what is the meaning of life`.

We will need to modify the context.py logic to stringify the index range.
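
A minimal parsing sketch (the helper name and regex here are hypothetical, not existing project code):

import re

# hypothetical helper: split an optional [a:b] index-range prefix off a question
RANGE_TAG = re.compile(r'^\[(\d*):(\d*)\]\s*(.*)$', re.DOTALL)

def split_range_tag(text):
    """Return ((a, b), question) for '[a:b] question', or (None, text) if untagged."""
    match = RANGE_TAG.match(text)
    if not match:
        return None, text
    a = int(match.group(1)) if match.group(1) else None
    b = int(match.group(2)) if match.group(2) else None
    return (a, b), match.group(3)

The returned range could then be used to slice the stored context before context.py stringifies it and prepends it to the request.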

[runtime] gracefully handle keyboard interrupts in `chat`

When we want to rewrite our queries in chat, we should be able to use keyboard interrupts without breaking the program. Something like:

        # Get user input
        try:
            i = await session.prompt_async(ANSI(f"{CYAN}> {RESET}"))
            tag, question = get_tag_from_text(i)
            prompt_length = len(question)

        # Handle keyboard interrupts by discarding the current input and
        # returning to the prompt instead of crashing out of the chat loop
        except KeyboardInterrupt:
            i = None
            continue

[docs] add some guidelines for bash scripting

There are a couple of ways to script the CLI using bash. For example, you can automate the process of sending multiple questions to the gptty query command using a bash script. This can be particularly useful if you have a list of questions stored in a file and want to process them all at once. Say you have a file questions.txt with each question on a new line, like below.

What are the key differences between machine learning, deep learning, and artificial intelligence?
How do I choose the best programming language for a specific project or task?
Can you recommend some best practices for code optimization and performance improvement?
What are the essential principles of good software design and architecture?
How do I get started with natural language processing and text analysis in Python?
What are some popular Python libraries or frameworks for building web applications?
Can you suggest some resources to learn about data visualization and its implementation in Python?
What are some important concepts in cybersecurity, and how can I apply them to my projects?
How do I ensure that my machine learning models are fair, ethical, and unbiased?
Can you recommend strategies for staying up-to-date with the latest trends and advancements in technology and programming?

You can send each question from the questions.txt file to the gptty query command using the following bash one-liner:

xargs -d '\n' -I {} gptty query --question "{}" < questions.txt

[model] add support for chatcompletion models

Currently, we only support completion models. We will need to find a way to support chat completion models too; a rough sketch follows the references below.

References

  1. chat completion source code python https://github.com/openai/openai-python/blob/main/openai/api_resources/chat_completion.py#L8
  2. chat completion create https://platform.openai.com/docs/api-reference/chat/create
  3. chat completions general introduction https://platform.openai.com/docs/guides/chat
  4. list of models https://platform.openai.com/docs/models/overview

Originally posted by @signebedi in #30 (comment)
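
A rough sketch of how routing between the completion and chat completion endpoints might look with the openai python package (the signature and parameters here are assumptions, not the project's actual code):

import openai

async def fetch_response(prompt, model_engine, max_tokens, temperature, model_type):
    # completion models (e.g. text-davinci-003) use the v1/completions endpoint
    if model_type == 'completion':
        return await openai.Completion.acreate(
            model=model_engine,
            prompt=prompt,
            max_tokens=max_tokens,
            temperature=temperature,
        )
    # chat models (e.g. gpt-3.5-turbo) use the v1/chat/completions endpoint,
    # which expects a list of role-tagged messages instead of a raw prompt
    if model_type == 'chat completion':
        return await openai.ChatCompletion.acreate(
            model=model_engine,
            messages=[{'role': 'user', 'content': prompt}],
            max_tokens=max_tokens,
            temperature=temperature,
        )
    raise ValueError(f"unsupported model type: {model_type}")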

Deploy v.0.2.1

Add v.0.2.1 with the modifications we've made to existing functionality, like the `log` subcommand and improved user documentation.

[config] add support for a `gptty.conf` file

This should include things like the following (a sample config is sketched after the list):

  • API_KEY: your OpenAI API key
  • GPT_VERSION: [future] pivot between GPT 3 and 4
  • YOUR_NAME: set this to Joe, and questions will be tagged as such
  • GPT_NAME: set this to Bob, and responses will be tagged as such
  • OUTPUT_FILE: path to file to write output to
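
A hypothetical sketch of what such a file might look like (the [main] section header mirrors the config examples elsewhere in this tracker; all values are placeholders):

[main]
API_KEY=sk-...
GPT_VERSION=3
YOUR_NAME=Joe
GPT_NAME=Bob
OUTPUT_FILE=gptty-output.txt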

Using arrow keys to edit question prints escape characters

When trying to edit a query, you can't use the arrow keys to traverse the prompt, an extremely necessary feature when working with a TTY. It gets especially annoying when you have to delete half your prompt to fix one simple spelling mistake, which can yield incorrect responses.
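
This is the usual symptom of reading raw terminal input without line editing; a line-editing library resolves it. A minimal sketch using prompt_toolkit, which the async chat snippet above already relies on:

from prompt_toolkit import PromptSession

session = PromptSession()
# prompt_toolkit provides readline-style line editing, so arrow keys move
# the cursor instead of printing escape sequences like ^[[D
text = session.prompt('> ')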

[bug] openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?

Using the gpt-3.5-turbo model:

[question] tell me three interesting world capital cities 
.....    Traceback (most recent call last):
  File "/home/sig/Code/gptty/venv/bin/gptty", line 11, in <module>
    load_entry_point('gptty', 'console_scripts', 'gptty')()
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/home/sig/Code/gptty/gptty/__main__.py", line 77, in chat
    asyncio.run(chat_async_wrapper(config_path))
  File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/home/sig/Code/gptty/gptty/__main__.py", line 107, in chat_async_wrapper
    await create_chat_room(configs=configs, config_path=config_path)
  File "/home/sig/Code/gptty/gptty/gptty.py", line 137, in create_chat_room
    response = await response_task
  File "/home/sig/Code/gptty/gptty/gptty.py", line 43, in fetch_response
    return await openai.Completion.acreate(
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/openai/api_resources/completion.py", line 45, in acreate
    return await super().acreate(*args, **kwargs)
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
    response, _, api_key = await requestor.arequest(
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/openai/api_requestor.py", line 310, in arequest
    resp, got_stream = await self._interpret_async_response(result, stream)
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/openai/api_requestor.py", line 645, in _interpret_async_response
    self._interpret_response_line(
  File "/home/sig/Code/gptty/venv/lib/python3.8/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?

https://stackoverflow.com/questions/75774873/openai-chatgpt-gpt-3-5-api-error-this-is-a-chat-model-and-not-supported-in-t

[runtime] add response formatting options

Suggestion via github:

Can you fix your program so that it preserves new line breaks? The output I'm seeing throws away GPT output formatting and puts all of the response text on one line. Paragraph breaks are lost, list item breaks are lost, etc...
Perhaps make this one of your config options,
new_lines: preserve | discard

Response formatting could definitely be better, and the best solution would be to give users the ability to select the formatting options.

What I've struggled to do is find the hook to accomplish this within the openai python API. We are using the Completion class of the openai python API. [1] Specifically, we are using the acreate method, which is an async wrapper for the create method. [2] This does not seem to retain the formatting when it provides a response. That said, I'm still delving into the API and might yet find something in there that would make this easier. At the end of the day, nothing is stopping us from writing our own application logic to apply formatting to a plaintext response, but that seems especially inelegant if the openai API already provides that functionality. A rough formatter sketch follows the references below.

Reference
[1] https://github.com/openai/openai-python/blob/main/openai/api_resources/completion.py#L9
[2] https://platform.openai.com/docs/api-reference/completions/create
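
In the meantime, a small application-side formatter would cover the suggested config option; a hypothetical sketch (function and option names are assumptions, and it assumes the raw response text still contains its newline characters):

def format_response(raw_text, new_lines='preserve'):
    # trim only leading/trailing whitespace so paragraph and list breaks
    # survive, or collapse everything onto one line when configured
    text = raw_text.strip()
    if new_lines == 'discard':
        text = ' '.join(text.split())
    return text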

[runtime] add a `--quiet` tag for `query` subcommand

Sometimes users may not want the response printed to stdout and instead just want query responses written to the output_file designated in the config. In these cases, we should allow them to pass a --quiet flag.
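
A minimal sketch with click, which the project already uses per the traceback above (the option wiring and helper are hypothetical):

import click

@click.command()
@click.option('--question', required=True, help='The question to send to the model.')
@click.option('--quiet', is_flag=True, default=False,
              help='Write the response to the output file without printing to stdout.')
def query(question, quiet):
    response = fetch_and_log_response(question)  # hypothetical helper
    if not quiet:
        click.echo(response)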

[model] add model validation

We should validate that the model name passed is an actual OpenAI model:

import openai

def get_available_models():
    # Query the OpenAI API for the models available to this API key
    response = openai.Model.list()
    return [model.id for model in response['data']]

def is_valid_model(model_name):
    available_models = get_available_models()
    return model_name in available_models

def validate_model_type(model_name):
    # Map the model name to the endpoint family it belongs to
    if ('davinci' in model_name or 'curie' in model_name) and is_valid_model(model_name):
        return 'completion'
    elif 'gpt' in model_name and is_valid_model(model_name):
        return 'chat completion'
    else:
        raise Exception(f"'{model_name}' is not a valid OpenAI model")

....


    # Set the parameters for the OpenAI completion API
    model_engine = configs['model'].rstrip('\n')

    try:
        model_type = validate_model_type(model_engine)
    except Exception:
        click.echo(f"{RED}FAILED to validate the model name '{model_engine}'. Are you sure this is a valid OpenAI model? Check the available models at <https://platform.openai.com/docs/models/overview> and try again.{RESET}")
        return

Install via PyPI

We should add a pip installation hook by adding a setup.py and a manifest file.
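
A minimal setup.py sketch (version, description, and dependency list are assumptions; the console-script name matches the entry point visible in the traceback above):

from setuptools import setup, find_packages

setup(
    name='gptty',
    version='0.2.1',  # assumed to track the release mentioned above
    description='A ChatGPT wrapper in your TTY',
    packages=find_packages(),
    install_requires=['click', 'openai', 'prompt_toolkit', 'pandas'],  # assumed dependency set
    entry_points={
        'console_scripts': [
            'gptty = gptty.__main__:main',  # hypothetical callable name
        ],
    },
)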

[runtime] add a check for internet connectivity and return error if no connection

See e.g. https://stackoverflow.com/a/62115290/13301284.

If the user runs the chatroom or query but there is no internet connection, return an error message in __main__.py.

import socket
from contextlib import closing

def has_internet_connection(host="google.com", port=443, timeout=3):
    """Check if the system has a valid internet connection.

    Args:
        host (str, optional): A well-known website to test the connection. Default is 'google.com'.
        port (int, optional): The port number to use for the connection. Default is 443.
        timeout (int, optional): The time in seconds to wait for a response before giving up. Default is 3.

    Returns:
        bool: True if the system has an internet connection, False otherwise.
    """
    try:
        with closing(socket.create_connection((host, port), timeout=timeout)):
            return True
    except OSError:
        return False



....



  # Print the text in cyan
  click.echo(f"{CYAN}{title}\nWelcome to gptty (v.{__version__}), a ChatGPT wrapper in your TTY.\nType :help in the chat interface if you need help getting started.{' Verbose / Debug mode is on.' if verbose else ''}{RESET}\n")
  
  if not os.path.exists(config_path):
      click.echo(f"{RED}FAILED to access app config file at {config_path}. Are you sure this is a valid config file? Run `gptty chat --help` for more information. You can get a sample config at <https://github.com/signebedi/gptty/blob/master/assets/gptty.ini.example>.")
      return

  # load the app configs
  configs = get_config_data(config_file=config_path)
  
  if not has_internet_connection(configs['verify_internet_endpoint']):
      click.echo(f"{RED}FAILED to verify connection at {configs['verify_internet_endpoint']}. Are you sure you are connected to the internet?{RESET}")
      return

  # create the output file if it doesn't exist
  with open(configs['output_file'], 'a'): pass

Project Roadmap

[runtime] add help commands

Using backslash commands similar to those in the postgres runtime, we should give users access to meta functionality like quitting, getting additional information, etc. A rough dispatcher sketch follows.
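
A rough dispatcher sketch (elsewhere this tracker uses colon-prefixed metacommands like :help and :context, so that convention is assumed here; the command set is hypothetical):

def handle_metacommand(text):
    """Return True if the input was a metacommand and has been handled."""
    if not text.startswith(':'):
        return False
    parts = text[1:].split()
    command = parts[0].lower() if parts else ''
    if command in ('q', 'quit'):
        raise SystemExit
    if command in ('h', 'help'):
        print('Available metacommands: :help, :quit')  # placeholder help text
    else:
        print(f'Unrecognized metacommand :{command}')
    return True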

[runtime] add support for question chains / queues

Users may wish to pass a set of pre-defined questions with the same context (or maybe even with different contexts) that they can have ChatGPT process while they step away from their workstations.

We should add support for a question queue (maybe a file?), or conversely add support for passing multiple questions at runtime: `gptty run --question "Question 1 here" --question "Question 2 here" --tags 'Your-tag-here'`. A minimal sketch of the latter follows this section.

A spool / queue file invites the creation of daemonized process to monitor and process that queue....
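
With click, accepting repeated --question options is a one-flag change; a minimal sketch (option names follow the issue text above, everything else is an assumption):

import click

@click.command()
@click.option('--question', multiple=True, required=True,
              help='May be passed multiple times to queue several questions.')
@click.option('--tags', default='', help='Tag(s) to attach to each question.')
def run(question, tags):
    # click collects repeated --question options into a tuple, so each
    # queued question can be processed in order
    for q in question:
        click.echo(f'[{tags}] {q}')  # placeholder for the real query logic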

[context] add support for `--additional_context` option for `query` subcommand

We should add an --additional_context option to the query subcommand that expects either a string or a file path, which it will bolt onto the request string (where? at the start? at the end?).

This allows users to extend their questions with details that might not typically be formatted like a question, like stack traces with line breaks, etc.

We should then write some logic to structure this context (removing line breaks, etc.) and apply length limits and/or keyword tokenization to it before bolting it onto the request string. A sketch of that preprocessing follows.
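
A hypothetical sketch of that preprocessing (the helper name and the 1000-character cap are placeholders):

import os

def load_additional_context(value, max_length=1000):
    """Accept either a literal string or a path to a file, and flatten it."""
    if os.path.isfile(value):
        with open(value) as f:
            value = f.read()
    # collapse line breaks and runs of whitespace so stack traces and other
    # multi-line snippets read as a single context string
    flattened = ' '.join(value.split())
    return flattened[:max_length]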

[context] allow keyword-only context using RAKE to minimize API usage fees

OpenAI charges by e.g. the number of tokens, meaning that passing a full 5000 words of context will become very costly (see e.g. https://platform.openai.com/docs/api-reference/completions/create, where the max_tokens parameter is discussed). We can probably manage this by adding a Rapid Automatic Keyword Extraction (RAKE) algorithm to pull out the keywords in the context.

from rake_nltk import Rake

# Uses stopwords for english from NLTK, and all punctuation characters by
# default
r = Rake()

# Extraction given the text.
r.extract_keywords_from_text(text_to_process)  # text_to_process: the raw context string

# Ranked phrases, highest to lowest.
keywords = r.get_ranked_phrases()

We can configure this as an app config.

[main]
context_keywords_only=1

[runtime] add bool config `log_responses`

We should let users decide whether to log responses by setting a bool config log_responses that defaults to true. Then, if it evaluates to false, we don't create or write to the output file. We also do not maintain the pandas data frame, and we disable the :l[og] metacommand option within the chat. A sketch of the gating follows.
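
A hypothetical sketch of that gating (helper name and config access pattern are assumptions):

def maybe_log_response(configs, entry):
    # honor the log_responses bool: skip all output-file writes when false
    if not configs.get('log_responses', True):
        return
    with open(configs['output_file'], 'a') as f:
        f.write(entry + '\n')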
