

UCI Packages Repository

This repository holds various packages/components that collectively form the Unified Communications Interface (UCI), providing a versatile and extensible solution for building conversational interfaces and chatbots.

The UCI Packages Repository is a collection of modular components designed to provide a comprehensive and flexible framework for developing conversational interfaces. Each package addresses a specific aspect of the UCI ecosystem, allowing for easy extensibility and customization.

Feel free to explore each package's documentation for detailed information on usage, configuration, and contribution guidelines. By combining these packages, developers can build powerful and adaptable conversational applications using the Unified Communications Interface.

This repository is used directly by the inbound service to provide APIs that seamlessly integrate the adapters present here.

The repository is organized into the following directories:

packages/xmessage

The xmessage directory contains specifications for the xmessage type, a foundational structure used extensively within UCI services. xmessage serves as a common and fluid type, allowing seamless conversion of diverse data within the UCI ecosystem. Read more about xmessage here.
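As a purely illustrative sketch (the real specification in this directory is the source of truth; all field and function names below are hypothetical), a minimal xmessage-like type and a provider conversion might look like:

```typescript
// Hypothetical sketch of a minimal xmessage-like structure. The actual
// specification lives in packages/xmessage; these fields are illustrative.
interface XMessage {
  messageId: string;
  channel: string;   // e.g. "WhatsApp"
  provider: string;  // e.g. "Gupshup"
  from: string;
  to: string;
  payload: { text?: string; mediaUrl?: string };
}

// A trivial converter from a provider-specific event into the common type.
function fromProviderEvent(event: { id: string; sender: string; body: string }): XMessage {
  return {
    messageId: event.id,
    channel: "WhatsApp",
    provider: "Gupshup",
    from: event.sender,
    to: "bot",
    payload: { text: event.body },
  };
}
```

In this sketch, adapters would own conversions like fromProviderEvent for their specific channel and provider, so everything downstream only ever sees the common type.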

packages/adapters

The adapters directory hosts plugin-like code responsible for enabling UCI to interact with multiple channels and providers. These adapters facilitate integration with various platforms, including but not limited to Gupshup, WhatsApp, Telegram Bot, Nodemailer, PWA, and more. Read more about adapters and how to create new adapters here.

packages/transformers

In the transformers directory, you'll find components that work on the xstate library. These transformers govern the flow of a bot, applying rules and generating results based on prompts. They play a crucial role in orchestrating conversations within the UCI framework. Read more about transformers here.

Contributors

amruth-vamshi, chinmoy12c, prtkjakhar, ryanwalker277, singhalkarun


Issues

Clean up transformers and add unit tests

  • Define a simple interface based on this architecture.

[architecture diagram]

  • Figure out how xstate would link transformers together, without hardwiring internal flow logic.

  • #25
    Instead, these should be separate transformers and xstate def should be as simple as:
    RUN: T1 -> T2 -> T3 -> END
    Instead of saying:
    RUN: T1(function A, check condition B, function C) -> T2 (function C) -> T3 (function X, check condition B, check condition C, function Z).

  • Hide internal code logic from xstate. As an example, this is a portion of the current xstate definition:

"searchSimilarChunks": {
            "invoke": {
                "src": "searchSimilarChunks",
                "onDone": [
                    {
                        "cond": "ifSimilarChunksFound",
                        "target": "llm",
                        "actions": [
                            "searchSimilarChunksRecordResponse"
                        ]
                    },
                    {
                        "target": "end",
                        "actions": [
                            "setNoContextOutput"
                        ]
                    }
                ],
                "onError": "handleError"
            }
        }

xstate currently works directly on "pure functions" exposed by each transformer. This needs to be refactored so that transformers expose only generic functionality, without code-level details, in order to integrate seamlessly with Flowise.

  • If possible, move to class/interface-based code instead of functional code.
  • Add unit tests that inject xstate and verify the behavior of transformers. (Maybe also add boilerplate classes for other transformers to use.)
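As a hedged sketch of the direction these bullets describe (all names here are hypothetical, not the repo's actual API): each transformer exposes one generic transform entry point, and the xstate definition stays a flat chain (RUN: T1 -> T2 -> T3 -> END) with no code-level details inside it:

```typescript
// Hypothetical generic transformer contract: xstate only ever sees a single
// `transform` entry point, never per-transformer internals.
interface Message { payload: { text?: string } }

interface ITransformer {
  readonly name: string;
  transform(message: Message): Message;
}

// Example concrete transformers; conditions and helpers stay inside them.
class TrimTransformer implements ITransformer {
  readonly name = "trim";
  transform(message: Message): Message {
    return { payload: { text: (message.payload.text ?? "").trim() } };
  }
}

class UppercaseTransformer implements ITransformer {
  readonly name = "uppercase";
  transform(message: Message): Message {
    return { payload: { text: (message.payload.text ?? "").toUpperCase() } };
  }
}

// The machine config only names transformers in sequence; internals are hidden.
const flow = {
  initial: "trim",
  states: {
    trim: { invoke: { src: "trim", onDone: "uppercase", onError: "handleError" } },
    uppercase: { invoke: { src: "uppercase", onDone: "end", onError: "handleError" } },
    end: { type: "final" },
    handleError: { type: "final" },
  },
};
```

The key property is that swapping or reordering transformers only touches the flat `flow` object, never the transformer code.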

Decouple dependency on outbound service when streaming is enabled

Problem Statement

  1. Currently some transformers, such as LLM, use outboundUrl directly when enableStream is true. This breaks the flow of the state machine,

    which follows the rule:

    inbound -> orchestrator -> go through multiple transformers based on state machine -> send final message to outbound once state machine ends on orchestrator

    Instead, what happens is:

    inbound -> orchestrator -> go through transformer and, if streaming is enabled, break the chain and send the message directly to outbound

    We need a way to handle streaming without breaking the state machine.

  2. A direct effect of this is that the output of the transformer cannot be consumed further when streaming is enabled. For example, say an LLM generates a response in English and the response needs to pass through a TRANSLATE transformer before being sent to the user. With the current setup this is not possible: enabling streaming sends the data directly to the user as soon as the LLM generates a response, so the TRANSLATE transformer never has a chance to receive the output.
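One possible direction, purely as a sketch (not the repository's actual design; all names hypothetical): let the transformer stream chunks through a callback injected by the orchestrator, so the orchestrator controls delivery to outbound while the assembled output still flows to downstream transformers such as TRANSLATE:

```typescript
// Sketch: the transformer emits chunks through an injected callback instead
// of calling the outbound service directly. The orchestrator decides what to
// do with each chunk, and the chain still receives the full response.
type ChunkSink = (chunk: string) => void;

function runLlmStream(chunks: string[], onChunk: ChunkSink): string {
  let full = "";
  for (const chunk of chunks) {
    onChunk(chunk);   // orchestrator may forward this to outbound immediately
    full += chunk;
  }
  return full;        // assembled response continues through the state machine
}

// Placeholder downstream transformer that can now consume the full output.
function translateTransformer(text: string): string {
  return `[translated] ${text}`;
}
```

With this shape, streaming becomes a delivery concern owned by the orchestrator, and the LLM -> TRANSLATE chain from the example above works unchanged.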

Reference code

You can take reference from the exact point in the code that causes this issue here:

Bhasai Enhancements

Based on the AKAI exercise, this is what we have come up with to enhance the experience and usability of the system:

  • UI change for setters. Instead of accepting JSON directly, we should have nested forms, which are much cleaner.
  • A tool/way to debug flowise states.
    • Provide an input and get an output
    • Every transformer input/output state should be visible
    • Should also include time taken by transformers
  • Allow draft recipes
  • Code highlighter
  • Create an if-else transformer for checking existence of value.
  • Create a transformer for creating different message types.
  • Create a switch case transformer to output message types.
  • Fix flakiness of classifiers.
  • Color code transformer types.
  • Have a human-readable label for all nodes.
  • Have a search for nodes by label, name etc. in the diagram. Zoom on click after search.
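For the if-else transformer idea in the list above, a minimal sketch of the existence check (function and branch names hypothetical):

```typescript
// Sketch of an if-else transformer: routes to one of two target nodes based
// on whether a value exists (treating null, undefined, and "" as absent).
function ifElseTransformer(value: unknown, ifTarget: string, elseTarget: string): string {
  const exists = value !== null && value !== undefined && value !== "";
  return exists ? ifTarget : elseTarget;
}
```

The returned string would be the name of the next node in the flow, keeping the branching decision out of the other transformers.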

Classification (Model) Transformer

  • Generic Huggingface/AI Tools Classification Transformer => Class (append that to xMessage)
    • (Class <> Label <> Cutoff) Mapping
{
    "class": [
        {
            "class": "seed",
            "label": "LABEL_0",
            "cutoff": ""
        }
    ],
    "config": {
        "SuppressedFields": ["", ""],
        "ExistingLabel": "string"
    }
}

Output

[
    [
        {
            "label": "LABEL_0",
            "score": 0.4173952639102936
        },
        {
            "label": "LABEL_2",
            "score": 0.3968888521194458
        },
        {
            "label": "LABEL_1",
            "score": 0.18571589887142181
        }
    ]
]

fn() => seed
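A sketch of the mapping function implied by fn() => seed (names hypothetical): pick the highest-scoring label that clears its cutoff and return the mapped class:

```typescript
// Hypothetical (class <> label <> cutoff) mapping applied to model output.
interface ClassMapping { class: string; label: string; cutoff: number }
interface ScoredLabel { label: string; score: number }

function pickClass(scores: ScoredLabel[], mappings: ClassMapping[]): string | null {
  // Walk labels from highest score down; return the first mapped class
  // whose score clears its cutoff.
  const sorted = [...scores].sort((a, b) => b.score - a.score);
  for (const s of sorted) {
    const m = mappings.find(mp => mp.label === s.label);
    if (m && s.score >= m.cutoff) return m.class;
  }
  return null;
}
```

Applied to the sample output above with a mapping of LABEL_0 to "seed" and a cutoff of 0.3, this would return "seed".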

Clean Up Translation/LLM Transformer

Currently, due to the streaming blocker, the LLM and Translation transformer code is messy and not independent. The code needs to be cleaned up once streaming is figured out.

Transformer Expose metadata and config

Transformers need to expose metaData about rendering and config parameters externally, so it can be used by frontend to render corresponding nodes.

Pick up config.spec.json and metadata.json from every transformer folder. Metadata should include: name (displayName), class, type, desc, and version (latest, 0.1, etc.).
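A hypothetical metadata.json along these lines (field values are illustrative, not a finalized schema) might be:

```json
{
    "name": "llm",
    "displayName": "LLM",
    "class": "GenericTransformer",
    "type": "llm",
    "desc": "Generates a response for a prompt",
    "version": "latest"
}
```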

Transformer Registry Marketplace

  • Design (HLD, LLD)
  • Break this down to C4GT Tickets.
  • Create a marketplace for Transformers and Adapters.
  • Create automatically generated docs for transformer types based on json config.
  • Create a guide for specific use cases for transformer instances.
  • Refactor types as a package from transformers.

Create a pause/restore Transformer

Create a transformer that is capable of pausing the state machine in a flow based on a condition and restoring the state from that point.
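xstate supports persisting and rehydrating machine state in principle; a library-free sketch of the bookkeeping an orchestrator might do around such a transformer (all names hypothetical):

```typescript
// Sketch: persist the machine's current state and context under a flow id,
// then hand it back when the flow is resumed.
interface FlowSnapshot {
  stateValue: string;               // current state node name
  context: Record<string, unknown>; // accumulated machine context
}

const pausedFlows = new Map<string, FlowSnapshot>();

function pauseFlow(flowId: string, snapshot: FlowSnapshot): void {
  pausedFlows.set(flowId, snapshot);
}

function restoreFlow(flowId: string): FlowSnapshot | undefined {
  const snapshot = pausedFlows.get(flowId);
  pausedFlows.delete(flowId); // a flow can only be resumed once
  return snapshot;
}
```

A real implementation would persist snapshots in durable storage rather than in memory, but the pause/restore contract would look similar.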

Vistaar Integration

I went through what Vistaar currently has.

What exists

  • Right now the playground can be found here. It currently has only Apurva as a provider. Allows for a natural language search which happens on the frontend based on the cached data (all responses)
  • Vistaar has an AI Services Pack which allows for extending VISTAAR with AI. I would explain it as parsing the intent and broadcasting it to the network. I am unclear on whether this is part of the Vistaar Network or will be separate, as this is an NP-hard problem statement and cannot be solved immediately. But it is good to see a framework for it, and that they are planning to include natural language queries in the mix.
  • The network specifications and details highlight two use cases: 1. discovery and fulfilment of "content", and 2. discovery and fulfilment of field-level extension workers' skilling content. The specification is broad enough to include an NLQ and resolve it through any provider. The content specification is also broad enough to cover a single message, a string of messages, and messages with a URL (which can be used for audio), which is most of what any chat application would have.

Options for Bharat Sahaiyak

  1. Low-hanging 1: Build the specification for weather and mandi prices and get them included in Vistaar; build a provider for this. We don't get into the Seeker experience for this.
  2. Low-hanging 2: Pass the entire query of one type (say weather or mandi prices) through the network and get a response directly. Provider experience.
  3. Actual use case: Build an intent classifier, deploy it as a Vistaar DPI, and add API-based access to Vistaar. The broadcasting and everything downstream can be handled by anyone else. This can then be used to do 1 through this DPI too.

Add Transformer Documentation

  • What are transformers
  • Different Classes of transformers
  • Supported types of transformers
  • Creating a new transformer

Refactor existing transformers

Transformers must not branch internally into separate types (or depend on an external entity to govern which type to use). A single transformer must map to a single type. For example, the llm model currently has GPT4 and GPT4withLamaIndex. This creates a dependency on xstate to govern the internal flow of a transformer and decide which function to call.

Also include:

  • neural coref transformers
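To illustrate the one-type-per-transformer rule with the GPT4 / GPT4withLamaIndex example from the issue (class structure hypothetical):

```typescript
// Each variant becomes its own transformer type, so the state machine selects
// a node rather than dictating which internal function a transformer calls.
class Gpt4Transformer {
  readonly type = "gpt4";
  transform(prompt: string): string {
    return `gpt4:${prompt}`; // placeholder for the actual model call
  }
}

class Gpt4WithLlamaIndexTransformer {
  readonly type = "gpt4-llamaindex";
  transform(prompt: string): string {
    return `gpt4-llamaindex:${prompt}`; // placeholder for the actual model call
  }
}
```

With this split, the xstate definition references one of the two types by name, and neither transformer needs to be told which function to run.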
