Comments (7)
@triptu would love your contribution if you have time! yeah i agree, i've so far taken the easiest route of printing, but having an explicit logger might be useful (it might also be good to think about what to do with the verbose option scattered everywhere)
from llama_index.
not for now (setting verbose=False is the safest bet, but i know some indices still have print statements). i'll investigate how to improve the log output.
in the meantime you can do something hacky like this: https://stackoverflow.com/questions/8391411/how-to-block-calls-to-print
As of 0.4.29, root logger calls have been replaced with module logger calls.
So you should see something like
INFO:llama_index.token_counter.token_counter:> [query] Total LLM token usage: 101 tokens
INFO:llama_index.token_counter.token_counter:> [query] Total embedding token usage: 1 tokens
In your logs now.
To disable these you can add something like:
import logging

logger = logging.getLogger('llama_index')
logger.setLevel(logging.WARNING)
That will result in llama_index logging only warnings and above.
If a specific submodule is the noisy one, you can adjust its level individually:
logger = logging.getLogger('llama_index.token_counter')
logger.setLevel(logging.WARNING)
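Put together, a minimal sketch of this setup (the logger names follow the log lines quoted above; logging behavior shown is standard Python logger-hierarchy inheritance):

```python
import logging
import sys

# Keep INFO logging enabled globally, but raise the llama_index package
# logger to WARNING so the token-counter lines stop appearing.
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger("llama_index").setLevel(logging.WARNING)

# Child loggers such as llama_index.token_counter.token_counter inherit
# the package level, so their INFO records are filtered out.
tc = logging.getLogger("llama_index.token_counter.token_counter")
tc.info("> [query] Total LLM token usage: 101 tokens")  # suppressed
tc.warning("something actually worth seeing")           # still emitted
```

Because levels are inherited down the dotted-name hierarchy, setting the package logger once covers every submodule that doesn't override it.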
Would you be interested in a PR for this? Do you have a preferred approach for something like this? I think a good way is to use the built-in logging module.
Going to tackle this. @triptu, please lmk if you have already started.
Approach:
- Think I am going to add a root logger (logging library) that gets pulled at class instantiation in the various base classes
- Thinking of adding root logging config at the module level and removing all the verbose=True arguments everywhere
- Will be able to set GPT_INDEX_LOG_LEVEL=foo as an environment variable or specify it in code.
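The environment-variable idea above could look something like this; the logger name and the module-level placement are assumptions about the proposal, not a shipped API:

```python
import logging
import os

# Package-level logger whose level is read from an environment variable
# at import time, replacing scattered verbose=True flags. Unknown or
# unset values fall back to WARNING.
logger = logging.getLogger("gpt_index")
level_name = os.environ.get("GPT_INDEX_LOG_LEVEL", "WARNING").upper()
logger.setLevel(getattr(logging, level_name, logging.WARNING))

# Library code would then log through child loggers, e.g.:
# logging.getLogger("gpt_index.indices.base").info("building index...")
```

Users could still override the level in code afterwards with a plain setLevel call, since the env var only sets the initial default.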
Notes:
as of 0.4.0, this issue should be resolved
@jerryjliu I do not think this is resolved yet. I still get the following logs:
INFO:root:> [query] Total LLM token usage: 101 tokens
INFO:root:> [query] Total embedding token usage: 1 tokens
When trying to change logging configuration, I get more logs, in addition to these (repeated).
I even tried things like the following, with no effect:
import contextlib
import os

with open(os.devnull, "w") as f, contextlib.redirect_stdout(f):
    index.query("<QUERY>")