Comments (2)
I get the same error using Chrome, and using yarn to install and build.
from flowise.
I have exported the chat workflow to a *.json file. Within that *.json there is no indication that Azure should be used. Here is the *.json of the chat:
{
"nodes": [
{
"width": 300,
"height": 375,
"id": "markdownTextSplitter_0",
"position": {
"x": 12.788000000000125,
"y": 68.79999999999993
},
"type": "customNode",
"data": {
"id": "markdownTextSplitter_0",
"label": "Markdown Text Splitter",
"version": 1,
"name": "markdownTextSplitter",
"type": "MarkdownTextSplitter",
"baseClasses": [
"MarkdownTextSplitter",
"RecursiveCharacterTextSplitter",
"TextSplitter",
"BaseDocumentTransformer",
"Runnable"
],
"category": "Text Splitters",
"description": "Split your content into documents based on the Markdown headers",
"inputParams": [
{
"label": "Chunk Size",
"name": "chunkSize",
"type": "number",
"default": 1000,
"optional": true,
"id": "markdownTextSplitter_0-input-chunkSize-number"
},
{
"label": "Chunk Overlap",
"name": "chunkOverlap",
"type": "number",
"optional": true,
"id": "markdownTextSplitter_0-input-chunkOverlap-number"
}
],
"inputAnchors": [],
"inputs": {
"chunkSize": "250",
"chunkOverlap": "50"
},
"outputAnchors": [
{
"id": "markdownTextSplitter_0-output-markdownTextSplitter-MarkdownTextSplitter|RecursiveCharacterTextSplitter|TextSplitter|BaseDocumentTransformer|Runnable",
"name": "markdownTextSplitter",
"label": "MarkdownTextSplitter",
"type": "MarkdownTextSplitter | RecursiveCharacterTextSplitter | TextSplitter | BaseDocumentTransformer | Runnable"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 12.788000000000125,
"y": 68.79999999999993
},
"dragging": false
},
{
"width": 300,
"height": 542,
"id": "pdfFile_0",
"position": {
"x": 457.0880000000001,
"y": -82.16799999999999
},
"type": "customNode",
"data": {
"id": "pdfFile_0",
"label": "Pdf File",
"version": 1,
"name": "pdfFile",
"type": "Document",
"baseClasses": [
"Document"
],
"category": "Document Loaders",
"description": "Load data from PDF files",
"inputParams": [
{
"label": "Pdf File",
"name": "pdfFile",
"type": "file",
"fileType": ".pdf",
"id": "pdfFile_0-input-pdfFile-file"
},
{
"label": "Usage",
"name": "usage",
"type": "options",
"options": [
{
"label": "One document per page",
"name": "perPage"
},
{
"label": "One document per file",
"name": "perFile"
}
],
"default": "perPage",
"id": "pdfFile_0-input-usage-options"
},
{
"label": "Use Legacy Build",
"name": "legacyBuild",
"type": "boolean",
"optional": true,
"additionalParams": true,
"id": "pdfFile_0-input-legacyBuild-boolean"
},
{
"label": "Metadata",
"name": "metadata",
"type": "json",
"optional": true,
"additionalParams": true,
"id": "pdfFile_0-input-metadata-json"
}
],
"inputAnchors": [
{
"label": "Text Splitter",
"name": "textSplitter",
"type": "TextSplitter",
"optional": true,
"id": "pdfFile_0-input-textSplitter-TextSplitter"
}
],
"inputs": {
"textSplitter": "{{markdownTextSplitter_0.data.instance}}",
"usage": "perPage",
"legacyBuild": "",
"metadata": ""
},
"outputAnchors": [
{
"id": "pdfFile_0-output-pdfFile-Document",
"name": "pdfFile",
"label": "Document",
"type": "Document"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 457.0880000000001,
"y": -82.16799999999999
},
"dragging": false
},
{
"width": 300,
"height": 422,
"id": "openAIEmbeddings_0",
"position": {
"x": 455.27408532798626,
"y": 550.5699950569399
},
"type": "customNode",
"data": {
"id": "openAIEmbeddings_0",
"label": "OpenAI Embeddings",
"version": 2,
"name": "openAIEmbeddings",
"type": "OpenAIEmbeddings",
"baseClasses": [
"OpenAIEmbeddings",
"Embeddings"
],
"category": "Embeddings",
"description": "OpenAI API to generate embeddings for a given text",
"inputParams": [
{
"label": "Connect Credential",
"name": "credential",
"type": "credential",
"credentialNames": [
"openAIApi"
],
"id": "openAIEmbeddings_0-input-credential-credential"
},
{
"label": "Model Name",
"name": "modelName",
"type": "options",
"options": [
{
"label": "text-embedding-3-large",
"name": "text-embedding-3-large"
},
{
"label": "text-embedding-3-small",
"name": "text-embedding-3-small"
},
{
"label": "text-embedding-ada-002",
"name": "text-embedding-ada-002"
}
],
"default": "text-embedding-ada-002",
"optional": true,
"id": "openAIEmbeddings_0-input-modelName-options"
},
{
"label": "Strip New Lines",
"name": "stripNewLines",
"type": "boolean",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-stripNewLines-boolean"
},
{
"label": "Batch Size",
"name": "batchSize",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-batchSize-number"
},
{
"label": "Timeout",
"name": "timeout",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-timeout-number"
},
{
"label": "BasePath",
"name": "basepath",
"type": "string",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-basepath-string"
}
],
"inputAnchors": [],
"inputs": {
"modelName": "text-embedding-ada-002",
"stripNewLines": "",
"batchSize": "",
"timeout": "",
"basepath": ""
},
"outputAnchors": [
{
"id": "openAIEmbeddings_0-output-openAIEmbeddings-OpenAIEmbeddings|Embeddings",
"name": "openAIEmbeddings",
"label": "OpenAIEmbeddings",
"type": "OpenAIEmbeddings | Embeddings"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 455.27408532798626,
"y": 550.5699950569399
},
"dragging": false
},
{
"width": 300,
"height": 404,
"id": "memoryVectorStore_0",
"position": {
"x": 886.496,
"y": 540.7759999999998
},
"type": "customNode",
"data": {
"id": "memoryVectorStore_0",
"label": "In-Memory Vector Store",
"version": 1,
"name": "memoryVectorStore",
"type": "Memory",
"baseClasses": [
"Memory",
"VectorStoreRetriever",
"BaseRetriever"
],
"category": "Vector Stores",
"description": "In-memory vectorstore that stores embeddings and does an exact, linear search for the most similar embeddings.",
"inputParams": [
{
"label": "Top K",
"name": "topK",
"description": "Number of top results to fetch. Default to 4",
"placeholder": "4",
"type": "number",
"optional": true,
"id": "memoryVectorStore_0-input-topK-number"
}
],
"inputAnchors": [
{
"label": "Document",
"name": "document",
"type": "Document",
"list": true,
"optional": true,
"id": "memoryVectorStore_0-input-document-Document"
},
{
"label": "Embeddings",
"name": "embeddings",
"type": "Embeddings",
"id": "memoryVectorStore_0-input-embeddings-Embeddings"
}
],
"inputs": {
"document": [
"{{pdfFile_0.data.instance}}"
],
"embeddings": "{{openAIEmbeddings_0.data.instance}}",
"topK": ""
},
"outputAnchors": [
{
"name": "output",
"label": "Output",
"type": "options",
"options": [
{
"id": "memoryVectorStore_0-output-retriever-Memory|VectorStoreRetriever|BaseRetriever",
"name": "retriever",
"label": "Memory Retriever",
"type": "Memory | VectorStoreRetriever | BaseRetriever"
},
{
"id": "memoryVectorStore_0-output-vectorStore-Memory|VectorStore",
"name": "vectorStore",
"label": "Memory Vector Store",
"type": "Memory | VectorStore"
}
],
"default": "retriever"
}
],
"outputs": {
"output": "retriever"
},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 886.496,
"y": 540.7759999999998
},
"dragging": false
},
{
"width": 300,
"height": 478,
"id": "conversationalRetrievalQAChain_0",
"position": {
"x": 1298.624,
"y": 424.13599999999997
},
"type": "customNode",
"data": {
"id": "conversationalRetrievalQAChain_0",
"label": "Conversational Retrieval QA Chain",
"version": 2,
"name": "conversationalRetrievalQAChain",
"type": "ConversationalRetrievalQAChain",
"baseClasses": [
"ConversationalRetrievalQAChain",
"BaseChain",
"Runnable"
],
"category": "Chains",
"description": "Document QA - built on RetrievalQAChain to provide a chat history component",
"inputParams": [
{
"label": "Return Source Documents",
"name": "returnSourceDocuments",
"type": "boolean",
"optional": true,
"id": "conversationalRetrievalQAChain_0-input-returnSourceDocuments-boolean"
},
{
"label": "Rephrase Prompt",
"name": "rephrasePrompt",
"type": "string",
"description": "Using previous chat history, rephrase question into a standalone question",
"warning": "Prompt must include input variables: {chat_history} and {question}",
"rows": 4,
"additionalParams": true,
"optional": true,
"default": "Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone Question:",
"id": "conversationalRetrievalQAChain_0-input-rephrasePrompt-string"
},
{
"label": "Response Prompt",
"name": "responsePrompt",
"type": "string",
"description": "Taking the rephrased question, search for answer from the provided context",
"warning": "Prompt must include input variable: {context}",
"rows": 4,
"additionalParams": true,
"optional": true,
"default": "I want you to act as a document that I am having a conversation with. Your name is \"AI Assistant\". Using the provided context, answer the user's question to the best of your ability using the resources provided.\nIf there is nothing in the context relevant to the question at hand, just say \"Hmm, I'm not sure\" and stop after that. Refuse to answer any question not about the info. Never break character.\n------------\n{context}\n------------\nREMEMBER: If there is no relevant information within the context, just say \"Hmm, I'm not sure\". Don't try to make up an answer. Never break character.",
"id": "conversationalRetrievalQAChain_0-input-responsePrompt-string"
}
],
"inputAnchors": [
{
"label": "Chat Model",
"name": "model",
"type": "BaseChatModel",
"id": "conversationalRetrievalQAChain_0-input-model-BaseChatModel"
},
{
"label": "Vector Store Retriever",
"name": "vectorStoreRetriever",
"type": "BaseRetriever",
"id": "conversationalRetrievalQAChain_0-input-vectorStoreRetriever-BaseRetriever"
},
{
"label": "Memory",
"name": "memory",
"type": "BaseMemory",
"optional": true,
"description": "If left empty, a default BufferMemory will be used",
"id": "conversationalRetrievalQAChain_0-input-memory-BaseMemory"
}
],
"inputs": {
"model": "{{chatOpenAI_0.data.instance}}",
"vectorStoreRetriever": "{{memoryVectorStore_0.data.instance}}",
"memory": "",
"returnSourceDocuments": true,
"rephrasePrompt": "Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone Question:",
"responsePrompt": "I want you to act as a document that I am having a conversation with. Your name is \"AI Assistant\". Using the provided context, answer the user's question to the best of your ability using the resources provided.\nIf there is nothing in the context relevant to the question at hand, just say \"Hmm, I'm not sure\" and stop after that. Refuse to answer any question not about the info. Never break character.\n------------\n{context}\n------------\nREMEMBER: If there is no relevant information within the context, just say \"Hmm, I'm not sure\". Don't try to make up an answer. Never break character."
},
"outputAnchors": [
{
"id": "conversationalRetrievalQAChain_0-output-conversationalRetrievalQAChain-ConversationalRetrievalQAChain|BaseChain|Runnable",
"name": "conversationalRetrievalQAChain",
"label": "ConversationalRetrievalQAChain",
"type": "ConversationalRetrievalQAChain | BaseChain | Runnable"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 1298.624,
"y": 424.13599999999997
},
"dragging": false
},
{
"width": 300,
"height": 572,
"id": "chatOpenAI_0",
"position": {
"x": 931.4240000000002,
"y": -71.7999999999999
},
"type": "customNode",
"data": {
"id": "chatOpenAI_0",
"label": "ChatOpenAI",
"version": 3,
"name": "chatOpenAI",
"type": "ChatOpenAI",
"baseClasses": [
"ChatOpenAI",
"BaseChatModel",
"BaseLanguageModel",
"Runnable"
],
"category": "Chat Models",
"description": "Wrapper around OpenAI large language models that use the Chat endpoint",
"inputParams": [
{
"label": "Connect Credential",
"name": "credential",
"type": "credential",
"credentialNames": [
"openAIApi"
],
"id": "chatOpenAI_0-input-credential-credential"
},
{
"label": "Model Name",
"name": "modelName",
"type": "options",
"options": [
{
"label": "gpt-4",
"name": "gpt-4"
},
{
"label": "gpt-4-turbo-preview",
"name": "gpt-4-turbo-preview"
},
{
"label": "gpt-4-0125-preview",
"name": "gpt-4-0125-preview"
},
{
"label": "gpt-4-1106-preview",
"name": "gpt-4-1106-preview"
},
{
"label": "gpt-4-vision-preview",
"name": "gpt-4-vision-preview"
},
{
"label": "gpt-4-0613",
"name": "gpt-4-0613"
},
{
"label": "gpt-4-32k",
"name": "gpt-4-32k"
},
{
"label": "gpt-4-32k-0613",
"name": "gpt-4-32k-0613"
},
{
"label": "gpt-3.5-turbo",
"name": "gpt-3.5-turbo"
},
{
"label": "gpt-3.5-turbo-1106",
"name": "gpt-3.5-turbo-1106"
},
{
"label": "gpt-3.5-turbo-0613",
"name": "gpt-3.5-turbo-0613"
},
{
"label": "gpt-3.5-turbo-16k",
"name": "gpt-3.5-turbo-16k"
},
{
"label": "gpt-3.5-turbo-16k-0613",
"name": "gpt-3.5-turbo-16k-0613"
}
],
"default": "gpt-3.5-turbo",
"optional": true,
"id": "chatOpenAI_0-input-modelName-options"
},
{
"label": "Temperature",
"name": "temperature",
"type": "number",
"step": 0.1,
"default": 0.9,
"optional": true,
"id": "chatOpenAI_0-input-temperature-number"
},
{
"label": "Max Tokens",
"name": "maxTokens",
"type": "number",
"step": 1,
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-maxTokens-number"
},
{
"label": "Top Probability",
"name": "topP",
"type": "number",
"step": 0.1,
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-topP-number"
},
{
"label": "Frequency Penalty",
"name": "frequencyPenalty",
"type": "number",
"step": 0.1,
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-frequencyPenalty-number"
},
{
"label": "Presence Penalty",
"name": "presencePenalty",
"type": "number",
"step": 0.1,
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-presencePenalty-number"
},
{
"label": "Timeout",
"name": "timeout",
"type": "number",
"step": 1,
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-timeout-number"
},
{
"label": "BasePath",
"name": "basepath",
"type": "string",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-basepath-string"
},
{
"label": "BaseOptions",
"name": "baseOptions",
"type": "json",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-baseOptions-json"
}
],
"inputAnchors": [
{
"label": "Cache",
"name": "cache",
"type": "BaseCache",
"optional": true,
"id": "chatOpenAI_0-input-cache-BaseCache"
}
],
"inputs": {
"cache": "",
"modelName": "gpt-4-turbo-preview",
"temperature": 0.9,
"maxTokens": "",
"topP": "",
"frequencyPenalty": "",
"presencePenalty": "",
"timeout": "",
"basepath": "",
"baseOptions": ""
},
"outputAnchors": [
{
"id": "chatOpenAI_0-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel|Runnable",
"name": "chatOpenAI",
"label": "ChatOpenAI",
"type": "ChatOpenAI | BaseChatModel | BaseLanguageModel | Runnable"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 931.4240000000002,
"y": -71.7999999999999
},
"dragging": false
}
],
"edges": [
{
"source": "markdownTextSplitter_0",
"sourceHandle": "markdownTextSplitter_0-output-markdownTextSplitter-MarkdownTextSplitter|RecursiveCharacterTextSplitter|TextSplitter|BaseDocumentTransformer|Runnable",
"target": "pdfFile_0",
"targetHandle": "pdfFile_0-input-textSplitter-TextSplitter",
"type": "buttonedge",
"id": "markdownTextSplitter_0-markdownTextSplitter_0-output-markdownTextSplitter-MarkdownTextSplitter|RecursiveCharacterTextSplitter|TextSplitter|BaseDocumentTransformer|Runnable-pdfFile_0-pdfFile_0-input-textSplitter-TextSplitter"
},
{
"source": "pdfFile_0",
"sourceHandle": "pdfFile_0-output-pdfFile-Document",
"target": "memoryVectorStore_0",
"targetHandle": "memoryVectorStore_0-input-document-Document",
"type": "buttonedge",
"id": "pdfFile_0-pdfFile_0-output-pdfFile-Document-memoryVectorStore_0-memoryVectorStore_0-input-document-Document"
},
{
"source": "openAIEmbeddings_0",
"sourceHandle": "openAIEmbeddings_0-output-openAIEmbeddings-OpenAIEmbeddings|Embeddings",
"target": "memoryVectorStore_0",
"targetHandle": "memoryVectorStore_0-input-embeddings-Embeddings",
"type": "buttonedge",
"id": "openAIEmbeddings_0-openAIEmbeddings_0-output-openAIEmbeddings-OpenAIEmbeddings|Embeddings-memoryVectorStore_0-memoryVectorStore_0-input-embeddings-Embeddings"
},
{
"source": "memoryVectorStore_0",
"sourceHandle": "memoryVectorStore_0-output-retriever-Memory|VectorStoreRetriever|BaseRetriever",
"target": "conversationalRetrievalQAChain_0",
"targetHandle": "conversationalRetrievalQAChain_0-input-vectorStoreRetriever-BaseRetriever",
"type": "buttonedge",
"id": "memoryVectorStore_0-memoryVectorStore_0-output-retriever-Memory|VectorStoreRetriever|BaseRetriever-conversationalRetrievalQAChain_0-conversationalRetrievalQAChain_0-input-vectorStoreRetriever-BaseRetriever"
},
{
"source": "chatOpenAI_0",
"sourceHandle": "chatOpenAI_0-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel|Runnable",
"target": "conversationalRetrievalQAChain_0",
"targetHandle": "conversationalRetrievalQAChain_0-input-model-BaseChatModel",
"type": "buttonedge",
"id": "chatOpenAI_0-chatOpenAI_0-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel|Runnable-conversationalRetrievalQAChain_0-conversationalRetrievalQAChain_0-input-model-BaseChatModel"
}
]
}
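Exports like this are easy to corrupt when copying by hand (unescaped double quotes inside the prompt strings are a common culprit), so it is worth running the file through a JSON parser before re-importing it. A minimal sketch in Python; the `chatflow.json` filename and the `load_flow` helper are mine, not part of Flowise:

```python
import json
import sys

def load_flow(path: str) -> dict:
    """Parse an exported Flowise chatflow, reporting where the JSON breaks."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    try:
        return json.loads(text)
    except json.JSONDecodeError as e:
        # Point at the offending spot instead of dying with a bare traceback.
        sys.exit(f"{path}: invalid JSON at line {e.lineno}, col {e.colno}: {e.msg}")

# Usage (path is hypothetical):
#   flow = load_flow("chatflow.json")
#   print(len(flow.get("nodes", [])), "nodes,", len(flow.get("edges", [])), "edges")
```

If the parse fails, the reported line and column usually land right on the stray quote or missing escape.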
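To confirm programmatically that nothing in an export references Azure, you can walk the node list and look for Azure-flavoured names, types, or credential names. A hedged sketch, assuming the export has already been parsed into a dict; the `azure_references` helper is mine, not a Flowise API:

```python
def azure_references(flow: dict) -> list[str]:
    """Return ids of nodes whose name, type, label, or credentials mention Azure."""
    hits = []
    for node in flow.get("nodes", []):
        data = node.get("data", {})
        candidates = [data.get("name", ""), data.get("type", ""), data.get("label", "")]
        # Credential names (e.g. "openAIApi" vs "azureOpenAIApi") live in inputParams.
        for param in data.get("inputParams", []):
            candidates.extend(param.get("credentialNames", []))
        if any("azure" in c.lower() for c in candidates):
            hits.append(node.get("id"))
    return hits

# For a flow like the one above, every node is plain OpenAI, so this comes back empty:
sample = {"nodes": [{"id": "chatOpenAI_0",
                     "data": {"name": "chatOpenAI", "type": "ChatOpenAI",
                              "inputParams": [{"credentialNames": ["openAIApi"]}]}}]}
print(azure_references(sample))  # expected: []
```

An empty result here supports the point above: the exported graph itself never asks for Azure, so any Azure-related error must come from somewhere else (e.g. credentials or environment configuration).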