
Comments (6)

dosubot commented on May 24, 2024

The issue you're encountering stems from a type mismatch between the expected HTTP client types in the llama_index.llms.openai module and the DefaultAsyncHttpxClient or DefaultHttpxClient you're attempting to use. The core of the problem is that the module is designed to work with a specific type of HTTP client, and using a different type, even if it's from the same library (httpx), leads to these errors.

To address this, you need to ensure that the HTTP client you pass to the llmOpenAi class matches the expected type. Since the module expects a synchronous httpx.Client, you should explicitly create an instance of this class and use it as the http_client parameter. Here's how you can adjust your code:

import httpx
from llama_index.llms.openai import OpenAI as llmOpenAi

# Create an instance of the synchronous httpx.Client
httpclient = httpx.Client()

# Initialize the llmOpenAi class with the httpx.Client instance
# (model_name is whichever model you are using, e.g. "gpt-3.5-turbo")
llm = llmOpenAi(temperature=0, model=model_name, http_client=httpclient)

This modification ensures that the HTTP client passed to the llmOpenAi class is of the correct type, thereby avoiding the type errors you've been facing. If you run into any more issues or have further questions, don't hesitate to reach out.
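
Note that the sync client above only covers the synchronous code path. If your pipeline also uses the async methods (achat/apredict), the OpenAI SDK constructs a separate AsyncOpenAI client, which requires an httpx.AsyncClient. A minimal sketch of the async pairing, using the openai SDK directly (this is an illustration, not llama_index's internal code):

import httpx
from openai import AsyncOpenAI

# The async SDK client only accepts an httpx.AsyncClient;
# passing a sync httpx.Client here raises the same TypeError.
async_http_client = httpx.AsyncClient()
aclient = AsyncOpenAI(http_client=async_http_client)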



dosubot commented on May 24, 2024

Hey @waveBoom, great to see you back! 😄 Looks like you've stumbled upon another intriguing puzzle for us to solve together. I'm diving into the details now and will circle back with a full answer shortly. Hang tight!


waveBoom commented on May 24, 2024

When I change the http client to DefaultHttpxClient():

from openai import OpenAI, DefaultHttpxClient
from llama_index.llms.openai import OpenAI as llmOpenAi

httpclient = DefaultHttpxClient()
llm = llmOpenAi(temperature=0, model=model_name, http_client=httpclient)

the error is:

ERROR:server.py:handle_mentions:An error occurred:
Traceback (most recent call last):
File "/Users/bobo/practise/myGPTReader/app/server.py", line 252, in bot_process
gpt_response, total_llm_model_tokens, total_embedding_model_tokens = future.result(timeout=300)
File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 445, in result
return self.__get_result()
File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/thread.py", line 52, in run
result = self.fn(*self.args, **self.kwargs)
File "/Users/bobo/practise/myGPTReader/app/gpt.py", line 304, in get_answer_from_llama_file_route_engine
answer = query_engine.query(dialog_messages)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 274, in wrapper
result = func(*args, **kwargs)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/base/base_query_engine.py", line 53, in query
query_result = self._query(str_or_query_bundle)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/query_engine/router_query_engine.py", line 182, in _query
responses.append(selected_query_engine.query(query_bundle))
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 274, in wrapper
result = func(*args, **kwargs)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/base/base_query_engine.py", line 53, in query
query_result = self._query(str_or_query_bundle)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 274, in wrapper
result = func(*args, **kwargs)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/query_engine/retriever_query_engine.py", line 190, in _query
response = self._response_synthesizer.synthesize(
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 274, in wrapper
result = func(*args, **kwargs)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/response_synthesizers/base.py", line 242, in synthesize
response_str = self.get_response(
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/response_synthesizers/tree_summarize.py", line 207, in get_response
summary_responses = run_async_tasks(tasks)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/async_utils.py", line 54, in run_async_tasks
outputs: List[Any] = asyncio.run(_gather())
File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/async_utils.py", line 52, in _gather
return await asyncio.gather(*tasks_to_execute)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 307, in async_wrapper
result = await func(*args, **kwargs)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/llms/llm.py", line 515, in apredict
chat_response = await self.achat(messages)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/llms/callbacks.py", line 71, in wrapped_async_llm_chat
f_return_val = await f(_self, messages, **kwargs)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/llms/openai/base.py", line 575, in achat
return await achat_fn(messages, **kwargs)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/tenacity/_asyncio.py", line 47, in call
do = self.iter(retry_state=retry_state)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/tenacity/init.py", line 314, in iter
return fut.result()
File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 438, in result
return self.__get_result()
File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/tenacity/_asyncio.py", line 50, in call
result = await fn(*args, **kwargs)
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/llms/openai/base.py", line 618, in _achat
aclient = self._get_aclient()
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/llms/openai/base.py", line 250, in _get_aclient
self._aclient = AsyncOpenAI(**self._get_credential_kwargs())
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/openai/_client.py", line 334, in init
super().init(
File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/openai/_base_client.py", line 1375, in init
raise TypeError(
TypeError: Invalid http_client argument; Expected an instance of httpx.AsyncClient but got <class 'openai._DefaultHttpxClient'>

So it seems I can't set any kind of http client type at all.


logan-markewich commented on May 24, 2024

DefaultAsyncHttpxClient is essentially the same as httpx.AsyncClient (it subclasses it)

I think the issue is that the http_client parameter is typed as httpx.Client instead of the async client
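
To make the mismatch concrete: the SDK's default clients are plain subclasses of the corresponding httpx clients, so a sync default client can never pass the async isinstance check. A quick illustration (assuming openai >= 1.x, where both default clients are importable):

import httpx
from openai import DefaultHttpxClient, DefaultAsyncHttpxClient

# DefaultHttpxClient subclasses httpx.Client, not httpx.AsyncClient,
# which is why AsyncOpenAI rejects it with the TypeError above.
print(isinstance(DefaultHttpxClient(), httpx.Client))            # True
print(isinstance(DefaultHttpxClient(), httpx.AsyncClient))       # False
print(isinstance(DefaultAsyncHttpxClient(), httpx.AsyncClient))  # True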


logan-markewich commented on May 24, 2024

tbh I'm not entirely sure what the issue is, it works fine for me

>>> from httpx import Client
>>> from llama_index.llms.openai import OpenAI
>>> llm = OpenAI(http_client=Client())
>>> llm.complete("Test")
CompletionResponse(text='Hello! How can I assist you today?', additional_kwargs={}, raw={'id': 'chatcmpl-9MdCFrjIKLuHwFmtwUV5U6AOHEZ5w', 'choices': [Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Hello! How can I assist you today?', role='assistant', function_call=None, tool_calls=None))], 'created': 1715180915, 'model': 'gpt-3.5-turbo-0125', 'object': 'chat.completion', 'system_fingerprint': None, 'usage': CompletionUsage(completion_tokens=9, prompt_tokens=8, total_tokens=17)}, logprobs=None, delta=None)

>>> from openai import DefaultHttpxClient
>>> llm = OpenAI(http_client=DefaultHttpxClient())
>>> llm.complete("Test")
CompletionResponse(text='Hello! How can I assist you today?', additional_kwargs={}, raw={'id': 'chatcmpl-9MdCugBPsZjMypjV1l55TtRKwzoCE', 'choices': [Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Hello! How can I assist you today?', role='assistant', function_call=None, tool_calls=None))], 'created': 1715180956, 'model': 'gpt-3.5-turbo-0125', 'object': 'chat.completion', 'system_fingerprint': None, 'usage': CompletionUsage(completion_tokens=9, prompt_tokens=8, total_tokens=17)}, logprobs=None, delta=None)
>>> 


logan-markewich commented on May 24, 2024

I think the actual issue here is just the missing option to provide an async http client in addition to the sync client — see the sketch below.
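
For anyone hitting this in the meantime: later releases of llama-index-llms-openai add an async_http_client parameter alongside http_client. A sketch assuming a version that supports that parameter (the model name is illustrative):

import httpx
from llama_index.llms.openai import OpenAI

# http_client serves the sync methods (chat/complete);
# async_http_client serves the async ones (achat/apredict).
llm = OpenAI(
    model="gpt-3.5-turbo",  # illustrative model name
    temperature=0,
    http_client=httpx.Client(),
    async_http_client=httpx.AsyncClient(),
)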

