
Comments (2)

erikaz1 commented on August 10, 2024

I like option (2) and your reasoning about its feasibility. I'm also working on another project with multimodal embeddings, and will experiment with that.

from fashion-shopping-assistant.

mingxuan-he commented on August 10, 2024

I just found a cool example from Neo4j (one of the DB providers that James mentioned in Friday's lecture).

Looks like Neo4j supports a data structure called a "Vector Index" that can store tabular features (in our case price, release date, etc.) alongside a vector embedding (in our case the multimodal embedding for product description + image). Example here:
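For reference, creating such an index might look roughly like the sketch below. The node label, property name, dimensions, and similarity function are all placeholders, not our actual schema:

import neo4j

# placeholder connection details
driver = neo4j.GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# create a vector index over a hypothetical Product.embedding property
# (1536 dims / cosine similarity are assumptions for an OpenAI-style embedding)
driver.execute_query("""
CREATE VECTOR INDEX products IF NOT EXISTS
FOR (p:Product) ON (p.embedding)
OPTIONS {indexConfig: {
  `vector.dimensions`: 1536,
  `vector.similarity_function`: 'cosine'
}}
""")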

Neo4j's docs even have code for the exact same use case (searching a DB of online shopping products). We'll need to swap the embedding from text-only to our own multimodal embedding model (a rough sketch follows the code below) and tweak the schema to match our raw data, but I think it's quite doable.

import neo4j
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import ChatPromptTemplate

emb = OpenAIEmbeddings()  # VertexAIEmbeddings() or BedrockEmbeddings() or ...
llm = ChatOpenAI()        # ChatVertexAI() or BedrockChat() or ChatOllama() ...

# user_input is the shopper's natural-language question;
# embed it with the same model used to index the products
vector = emb.embed_query(user_input)

vector_query = """
// find products by similarity search in the vector index
CALL db.index.vector.queryNodes('products', 5, $embedding) YIELD node AS product, score

// enrich with additional explicit relationships from the knowledge graph
MATCH (product)-[:HAS_CATEGORY]->(cat), (product)-[:BY_BRAND]->(brand)
MATCH (product)-[:HAS_REVIEW]->(review {rating: 5})<-[:WROTE]-(customer)

// return relevant contextual information
RETURN product.Name, product.Description, brand.Name, cat.Name,
       collect(review { .Date, .Text })[0..5] AS reviews, score
"""

# placeholder connection details
driver = neo4j.GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
records, _, _ = driver.execute_query(vector_query, embedding=vector)
context = format_context(records)  # helper that serializes records into prompt text

template = """
You are a helpful assistant that helps users find information for their shopping needs.
Only use the context provided, do not add any additional information.
Context: {context}
User question: {question}
"""

# build the prompt and pipe it into the chat model
chain = ChatPromptTemplate.from_template(template) | llm

answer = chain.invoke({"question": user_input, "context": context}).content
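As for the embedding swap, here's a rough sketch of what a multimodal replacement could look like, using a CLIP-style model via sentence-transformers (which can encode both text and PIL images into the same space). The model name and the simple mean pooling of text and image vectors are assumptions, not a final design:

from PIL import Image
from sentence_transformers import SentenceTransformer

# hypothetical swap: a CLIP model embeds text and images into one space
clip = SentenceTransformer("clip-ViT-B-32")

def embed_product(description: str, image_path: str) -> list[float]:
    # average the text and image vectors into a single product embedding
    # (mean pooling is an assumption, not the project's actual scheme)
    text_vec = clip.encode(description)
    image_vec = clip.encode(Image.open(image_path))
    return ((text_vec + image_vec) / 2).tolist()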

Seems like they have great support for LangChain's RAG pipelines, so it should be easy to integrate the search function with the LLM agent.
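For instance, LangChain's Neo4jVector store can wrap an existing vector index directly, something like the sketch below (connection details and index name are placeholders):

from langchain.vectorstores import Neo4jVector

# hypothetical wiring: point LangChain at the existing 'products' vector index
store = Neo4jVector.from_existing_index(
    embedding=emb,
    url="bolt://localhost:7687",  # placeholder connection details
    username="neo4j",
    password="password",
    index_name="products",
)
docs = store.similarity_search(user_input, k=5)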

from fashion-shopping-assistant.
