To load multiple HTML files and parse them for both unstructured text and structured tables, you can use the `HTMLTagReader` and `UnstructuredElementNodeParser` classes from the LlamaIndex framework. Here's a method to achieve this:
- Load HTML files: use `HTMLTagReader` to read the HTML files and extract text from specific tags.
- Parse content: use `UnstructuredElementNodeParser` to handle mixed text and tables effectively.
Here's a complete example:

```python
from pathlib import Path
from llama_index.readers.file.html.base import HTMLTagReader
from llama_index.core.node_parser.relational.unstructured_element import UnstructuredElementNodeParser
from llama_index.core.schema import Document


def load_html_directory(directory_path: str, tag: str = "section", ignore_no_id: bool = False):
    html_reader = HTMLTagReader(tag=tag, ignore_no_id=ignore_no_id)
    node_parser = UnstructuredElementNodeParser()
    documents = []
    directory = Path(directory_path)
    for html_file in directory.glob("*.html"):
        docs = html_reader.load_data(html_file)
        for doc in docs:
            document = Document(text=doc)
            nodes = node_parser.get_nodes_from_documents([document])
            documents.extend(nodes)
    return documents


# Example usage
directory_path = "/path/to/html/files"
documents = load_html_directory(directory_path, tag="section", ignore_no_id=False)
# Now `documents` contains the parsed content from all HTML files in the directory
```
This method will load and parse all HTML files in the specified directory, extracting both unstructured text and structured tables for optimal retrieval and query results [1][2][3][4].
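To make the text/table split concrete, here is a minimal, dependency-free sketch of the underlying idea: separating `<table>` elements from the remaining text of an HTML document. It uses only Python's standard library and is an illustration of the concept, not the LlamaIndex parser's actual implementation; the `TableSplitter` class is a hypothetical name.

```python
from html.parser import HTMLParser


class TableSplitter(HTMLParser):
    """Collects <table> elements separately from the remaining text content."""

    def __init__(self):
        super().__init__()
        self.tables = []        # raw HTML of each complete table
        self.text_chunks = []   # text found outside any table
        self._table_depth = 0   # >0 while inside a <table>
        self._table_buf = []

    def handle_starttag(self, tag, attrs):
        if tag == "table":
            self._table_depth += 1
        if self._table_depth:
            self._table_buf.append(f"<{tag}>")

    def handle_endtag(self, tag):
        if self._table_depth:
            self._table_buf.append(f"</{tag}>")
        if tag == "table" and self._table_depth:
            self._table_depth -= 1
            if self._table_depth == 0:
                # Closed the outermost table: emit it as one structured element
                self.tables.append("".join(self._table_buf))
                self._table_buf = []

    def handle_data(self, data):
        if self._table_depth:
            self._table_buf.append(data)
        elif data.strip():
            self.text_chunks.append(data.strip())


splitter = TableSplitter()
splitter.feed("<p>Intro text</p><table><tr><td>1</td></tr></table><p>Outro</p>")
print(splitter.text_chunks)  # ['Intro text', 'Outro']
print(len(splitter.tables))  # 1
```

In the LlamaIndex pipeline above, `UnstructuredElementNodeParser` plays this role at a higher level, turning table elements into structured nodes and the rest into text nodes.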
To continue talking to Dosu, mention @dosu.
The error occurs because the `Document` class expects a string for the `text` attribute, but it is receiving a `Document` object instead. You need to pass the `text` attribute of the `Document` object to the `Document` constructor. Here is the corrected code:
```python
from pathlib import Path
from llama_index.readers.file.html.base import HTMLTagReader
from llama_index.core.node_parser.relational.unstructured_element import UnstructuredElementNodeParser
from llama_index.core.schema import Document


def load_html_directory(directory_path: str, tag: str = "section", ignore_no_id: bool = False):
    html_reader = HTMLTagReader(tag=tag, ignore_no_id=ignore_no_id)
    node_parser = UnstructuredElementNodeParser()
    documents = []
    directory = Path(directory_path)
    for html_file in directory.glob("*.html"):
        docs = html_reader.load_data(html_file)
        for doc in docs:
            # Pass the text and metadata attributes, not the Document object itself
            document = Document(text=doc.text, metadata=doc.metadata)
            nodes = node_parser.get_nodes_from_documents([document])
            documents.extend(nodes)
    return documents


# Example usage
directory_path = "/path/to/html/files"
documents = load_html_directory(directory_path, tag="section", ignore_no_id=False)
# Now `documents` contains the parsed content from all HTML files in the directory
```
This change ensures that the `Document` class receives the `text` attribute as a string, which should resolve the `ValidationError` [1].
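The failure mode can be reproduced without LlamaIndex at all: any constructor that validates `text` as a string will reject a whole document object. A minimal sketch, with `StrictDocument` as a hypothetical stand-in for a validated model like LlamaIndex's pydantic-based `Document`:

```python
class StrictDocument:
    """Minimal stand-in for a validated model: `text` must be a str."""

    def __init__(self, text: str):
        if not isinstance(text, str):
            # Mimics the pydantic ValidationError raised by the real Document
            raise TypeError(f"text must be str, got {type(text).__name__}")
        self.text = text


inner = StrictDocument(text="hello")

try:
    StrictDocument(text=inner)       # passing the object itself fails
except TypeError as exc:
    print(exc)                       # text must be str, got StrictDocument

outer = StrictDocument(text=inner.text)  # passing the .text attribute succeeds
print(outer.text)                        # hello
```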
To set up the query configuration for a Retrieval-Augmented Generation (RAG) system using the LlamaIndex framework, you need to follow several stages: loading, indexing, storing, querying, and evaluation. Below is a code example that demonstrates how to configure the querying stage, which is crucial for a RAG system.
Code Example for Query Configuration in RAG
```python
from llama_index.core import Document, VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.retrievers import BaseRetriever, VectorIndexRetriever
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.ingestion import IngestionPipeline
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.core.query_engine import CustomQueryEngine
from llama_index.core.response_synthesizers import BaseSynthesizer
from llama_index.core import get_response_synthesizer

# Step 1: Load your data into Documents
loader = SimpleDirectoryReader(input_dir="./data")
documents = loader.load_data()

# Step 2: Create the pipeline with transformations
pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(),
    ]
)
nodes = pipeline.run(documents=documents, show_progress=True)

# Step 3: Index your data
embed_model = OpenAIEmbedding()
index = VectorStoreIndex(nodes=nodes, embed_model=embed_model)

# Step 4: Configure the retriever
retriever = VectorIndexRetriever(index=index, similarity_top_k=10)

# Step 5: Define a Custom Query Engine
class RAGQueryEngine(CustomQueryEngine):
    """RAG Query Engine."""

    retriever: BaseRetriever
    response_synthesizer: BaseSynthesizer

    def custom_query(self, query_str: str):
        nodes = self.retriever.retrieve(query_str)
        response_obj = self.response_synthesizer.synthesize(query_str, nodes)
        return response_obj

# Step 6: Configure the Query Engine
synthesizer = get_response_synthesizer(response_mode="compact")
query_engine = RAGQueryEngine(
    retriever=retriever, response_synthesizer=synthesizer
)

# Step 7: Query the index
query = "How tall is the smallest penguin?"
response = query_engine.query(query)
print(response)
```
Explanation
- Loading: load your data into `Document` objects using `SimpleDirectoryReader`.
- Pipeline: create an ingestion pipeline with transformations like `SentenceSplitter`.
- Indexing: create an index using `VectorStoreIndex` and add your documents to it.
- Querying:
  - Retriever: set up a `VectorIndexRetriever` to fetch relevant context from the index.
  - Custom Query Engine: define a `RAGQueryEngine` that performs retrieval and synthesis.
- Query Execution: execute a query and get the response.
This code provides a basic setup for querying in a RAG system using the LlamaIndex framework. You can customize each component (retriever, query engine) based on your specific requirements [1][2][3][4].
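The retrieve-then-synthesize pattern that the custom query engine implements can be shown in a library-free sketch. Plain word-overlap scoring stands in for embedding similarity, and a string template stands in for the LLM call; `retrieve` and `synthesize` here are illustrative names, not LlamaIndex APIs:

```python
def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Score each document by word overlap with the query and return the top_k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def synthesize(query: str, context: list[str]) -> str:
    """Stand-in for the LLM call: stitch retrieved context into an answer."""
    return f"Q: {query}\nContext: {' | '.join(context)}"


corpus = [
    "The fairy penguin is the smallest penguin species",
    "Emperor penguins are the tallest penguins",
    "Django is a web framework",
]

# Same two-step flow as RAGQueryEngine.custom_query: retrieve, then synthesize
answer = synthesize("smallest penguin", retrieve("smallest penguin", corpus))
print(answer)
```

In the real pipeline, `VectorIndexRetriever.retrieve` replaces the overlap scoring and the response synthesizer replaces the string template, but the control flow is the same.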
The query configuration for a Retrieval-Augmented Generation (RAG) system built with LlamaIndex does not differ from the earlier HTML-loading example in its ability to parse HTML documents containing tables and unstructured data. Both configurations can handle unstructured documents, including HTML, and can parse complex documents with tables and unstructured data using LlamaParse, the proprietary document parser provided by LlamaIndex.
Supporting evidence from the document:
- "LlamaIndex can pull in unstructured text, PDFs, Notion and Slack documents and more and index the data within them."
- "LlamaIndex provides advanced indexing/retrieval integrated with LlamaParse, our proprietary document parser."
Additionally, the new document provides an example of how to use the Unstructured library to parse out tables and non-table elements from HTML documents, and then use LlamaIndex's recursive retrieval to index and retrieve these elements as needed. This further confirms that the RAG system configuration can parse HTML documents with tables and unstructured data in the same way as the previous example.
Supporting evidence from the new document:
- "We use Unstructured to parse out the tables, and use LlamaIndex recursive retrieval to index/retrieve tables if necessary given the user question."
- "In these sections we use Unstructured to parse out the table and non-table elements."
- "Now that we've extracted tables and their summaries, we can setup a recursive retriever in LlamaIndex to query these tables."
Thus, the RAG system configuration also parses HTML documents with tables and unstructured data in the same way as the previous example [1][2].
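The recursive-retrieval idea quoted above (retrieve over short table summaries, then follow the match back to the underlying table) can be sketched without any library, using plain dicts; every name here is illustrative rather than a LlamaIndex API:

```python
# Each table gets a short summary; retrieval runs over the summaries,
# then follows the matched summary id back to the full structured table.
tables = {
    "t1": [["city", "population"], ["Oslo", "700000"]],
    "t2": [["species", "height_cm"], ["fairy penguin", "33"]],
}
summaries = {
    "t1": "table of city populations",
    "t2": "table of penguin species heights",
}


def recursive_retrieve(query: str):
    q = set(query.lower().split())
    # Step 1: pick the summary with the largest word overlap with the query
    best_id = max(summaries, key=lambda tid: len(q & set(summaries[tid].split())))
    # Step 2: follow the reference from summary to the structured table
    return tables[best_id]


print(recursive_retrieve("how tall is the penguin"))
# [['species', 'height_cm'], ['fairy penguin', '33']]
```

In LlamaIndex, `IndexNode` objects play the role of the summary entries and the recursive retriever performs the second hop automatically.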