
Comments (14)

BeMain commented on August 15, 2024

The default model for gpt-migrate is gpt-4-32k. That was the model throwing the error, because I don't have access to it. If that's what you mean?

I'm not sure about the underlying code; I haven't checked the source myself. But this repo in combination with the stack trace I provided should be enough to figure out what exactly was throwing the error...I think? Let me know if there's something more you need.

from gpt-migrate.

krrishdholakia commented on August 15, 2024

Sounds good @BeMain - will follow up


mpmnath commented on August 15, 2024

Same issue here. I think you need to upgrade your API plan to use the GPT-4 model; the README says gpt-4-32k is preferred.


christian-ek commented on August 15, 2024

I'm getting the same problem: I only have access to gpt-4, not gpt-4-32k, and from what I can see there's no way to upgrade.


mpmnath commented on August 15, 2024

I was able to at least get this running now, but there is a rate limit error and the execution stops abruptly 😞


BeMain commented on August 15, 2024

Oh, so that's what the error message means: that the model isn't available? That might be it, since using gpt-3.5-turbo or gpt-4 doesn't raise this error (it complains about token length instead). Strange, though: I tried this a few weeks back and think I got the same "model isn't available" error when trying to use gpt-4-32k then too, only I remember the message being much clearer. Maybe things have changed recently.

I too only have access to gpt-4, so perhaps the only option is waiting for #2 then?


BeMain commented on August 15, 2024

> I was able to at least get this running now, but there is a rate limit error and the execution stops abruptly 😞

What did you do differently to get it to work?


mpmnath commented on August 15, 2024

> I was able to at least get this running now, but there is a rate limit error and the execution stops abruptly 😞
>
> What did you do differently to get it to work?

I used the gpt-4 model. Since gpt-4 is rate-limited (unlike gpt-4-32k), I wrapped all the API calls in utils.py in try/except so the request is retried:

# In gpt-migrate's utils.py; assumes time, openai, yaspin and typer are imported at the top of the file.
def llm_run(prompt, waiting_message, success_message, globals):
    output = ""
    with yaspin(text=waiting_message, spinner="dots") as spinner:
        try:
            output = globals.ai.run(prompt)
        except openai.error.RateLimitError as e:
            # Wait for the interval suggested by the API (or 30s by default), then retry once
            retry_time = e.retry_after if hasattr(e, 'retry_after') else 30
            time.sleep(retry_time)
            output = globals.ai.run(prompt)  # Retry the API request
        spinner.ok("✅ ")

    if success_message:
        success_text = typer.style(success_message, fg=typer.colors.GREEN)
        typer.echo(success_text)

    return output
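A more resilient variant of that retry (a sketch, not code from the repo: `call` stands in for `globals.ai.run`, and `retry_after` for the hint OpenAI's `RateLimitError` may carry) retries several times with exponential backoff instead of retrying only once:

```python
import time


def retry_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff, honoring a retry_after hint if present."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as exc:  # in practice, catch openai.error.RateLimitError
            if attempt == max_retries - 1:
                raise  # out of retries; surface the original error
            # Prefer the server-provided wait time; otherwise back off exponentially
            delay = getattr(exc, "retry_after", None) or base_delay * (2 ** attempt)
            time.sleep(delay)
```

Usage would be `output = retry_with_backoff(lambda: globals.ai.run(prompt))`, which keeps the spinner logic unchanged while surviving bursts of rate limiting.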


krrishdholakia commented on August 15, 2024

Hey @BeMain @mpmnath @christian-ek

I'm the co-maintainer of litellm. We should have returned a better error message here. That error means that no valid model was found.

Can you show me the code to repro this problem? What was the model name that got passed in?


stridera commented on August 15, 2024

Was there any update on this? It doesn't like gpt-4 at all; gpt-4-turbo-preview seems to work, but the token arguments are incorrect.

gpt-4
openai.error.InvalidRequestError: The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.

gpt-4-turbo-preview
ValueError: No valid completion model args passed in - {'model': 'gpt-4-turbo-preview', 'messages': [{'role': 'user', 'content': '...'}], 'functions': [], 'function_call': '', 'temperature': 0.0, 'top_p': 1, 'n': 1, 'stream': False, 'stop': None, 'max_tokens': 10000, 'presence_penalty': 0, 'frequency_penalty': 0, 'logit_bias': {}, 'user': '', 'force_timeout': 60, 'azure': False, 'logger_fn': None, 'verbose': False, 'optional_params': {'temperature': 0.0, 'max_tokens': 10000}}

I'll start updating args based on the openai API.
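One way to do that kind of cleanup (a sketch, not stridera's actual change; the parameter set below is an assumption inferred from the argument dump above, not an exhaustive list) is to filter the kwargs down to keys the OpenAI chat completions endpoint accepts, dropping litellm-internal ones like `force_timeout` and `azure`:

```python
# Parameters the OpenAI chat completions endpoint accepts (assumed subset).
OPENAI_CHAT_PARAMS = {
    "model", "messages", "temperature", "top_p", "n", "stream", "stop",
    "max_tokens", "presence_penalty", "frequency_penalty", "logit_bias", "user",
}


def filter_openai_args(kwargs):
    """Drop keys the OpenAI API would reject, e.g. force_timeout, azure, logger_fn."""
    return {k: v for k, v in kwargs.items() if k in OPENAI_CHAT_PARAMS}
```

Empty values like `'functions': []` and `'function_call': ''` would likely also need dropping, since the API rejects empty strings there.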


st9824-att commented on August 15, 2024

I just downloaded this last week and started setting it up. I'm using gpt-4-32k and am currently stuck on this error. How do I fix it?


krrishdholakia commented on August 15, 2024

Hey - we should have this fixed on our end (i.e. raising the right error here). I also don't see this line ValueError: No valid completion model args passed in in litellm's codebase anymore - https://github.com/search?q=repo%3ABerriAI%2Flitellm+%22ValueError%3A+No+valid+completion+model+args+passed+in%22&type=code


st9824-att commented on August 15, 2024

Interesting. I wonder if this is coming from one of the Python libraries. Since I just downloaded gpt-migrate last week, I should have the latest code.

        raise InvalidRequestError(f"CohereException - {error_str}", f"{model}")
    elif "CohereConnectionError" in exception_type:  # cohere seems to fire these err
        raise RateLimitError(f"CohereException - {original_exception.message}")
>   raise original_exception  # base case - return the original exception
else:
    raise original_exception

locals:
    error_str          = "No valid completion model args passed in - {'model': 'openrouter/openai/gpt-4-32"+3349
    exception_type     = 'ValueError'
    model              = 'openrouter/openai/gpt-4-32k'
    original_exception = ValueError('No valid completion model args passed in - {'model':


krrishdholakia commented on August 15, 2024

That looks like litellm, but I don't think it's our latest version.

Based on the requirements.txt, I can see it's on a pretty old version.

Can you try this snippet on our latest version (v1.34.35) and let me know if you still see it?

# pip install litellm==1.34.35

from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-4-32k", messages=messages)

print(response)

