sublayerapp / sublayer

A model-agnostic Ruby Generative AI DSL and framework. Provides base classes for building Generators, Actions, Tasks, and Agents that can be used to build AI powered applications in Ruby.

Home Page: https://docs.sublayer.com

License: MIT License

Languages: Ruby 99.46%, Shell 0.54%
Topics: ai, dsl, ruby, agents, ai-agents, ai-agents-framework

sublayer's Introduction

Sublayer

A model-agnostic Ruby AI Agent framework. Provides base classes for building Generators, Actions, Tasks, and Agents that can be used to build AI powered applications in Ruby.

For more detailed documentation visit our documentation site: https://docs.sublayer.com.

Note on Versioning

Pre-1.0, we anticipate many breaking changes to the API. Our current plan is to reserve breaking changes for minor (0.x) releases; patch releases (0.x.y) will be used for new features and bug fixes.

To maintain stability in your application, we recommend pinning Sublayer in your Gemfile to a specific minor version. For example, to pin to 0.0.x releases, you would add the following line to your Gemfile:

gem 'sublayer', '~> 0.0.0'
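RubyGems' pessimistic operator can be checked directly with Gem::Requirement. For example, '~> 0.0.0' allows only 0.0.x patch releases, while '~> 0.0' allows any 0.x release below 1.0:

```ruby
require "rubygems"

# How the pessimistic (~>) constraint behaves:
# "~> 0.0.0" allows 0.0.x patch releases only; "~> 0.0" allows any 0.x.
patch_pin = Gem::Requirement.new("~> 0.0.0")
minor_pin = Gem::Requirement.new("~> 0.0")

puts patch_pin.satisfied_by?(Gem::Version.new("0.0.9"))  # true
puts patch_pin.satisfied_by?(Gem::Version.new("0.1.0"))  # false
puts minor_pin.satisfied_by?(Gem::Version.new("0.9.0"))  # true
puts minor_pin.satisfied_by?(Gem::Version.new("1.0.0"))  # false
```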

Installation

Install the gem by running the following command:

$ gem install sublayer

Or add this line to your application's Gemfile:

gem 'sublayer', '~> 0.0'

Choose your AI Model

Sublayer is model-agnostic and can be used with any AI model. Below are the currently supported providers and how to configure them.

OpenAI (Default)

Expects you to have an OpenAI API key set in the OPENAI_API_KEY environment variable.

Visit OpenAI to get an API key.

Usage:

Sublayer.configuration.ai_provider = Sublayer::Providers::OpenAI
Sublayer.configuration.ai_model = "gpt-4-turbo-preview"

Gemini

Expects you to have a Gemini API key set in the GEMINI_API_KEY environment variable.

Visit Google AI Studio to get an API key.

Usage:

Sublayer.configuration.ai_provider = Sublayer::Providers::Gemini
Sublayer.configuration.ai_model = "gemini-pro"

Claude

Expects you to have a Claude API key set in the ANTHROPIC_API_KEY environment variable.

Visit Anthropic to get an API key.

Usage:

Sublayer.configuration.ai_provider = Sublayer::Providers::Claude
Sublayer.configuration.ai_model = "claude-3-opus-20240229"

Groq

Expects you to have a Groq API key set in the GROQ_API_KEY environment variable.

Visit Groq Console to get an API key.

Usage:

Sublayer.configuration.ai_provider = Sublayer::Providers::Groq
Sublayer.configuration.ai_model = "mixtral-8x7b-32768"

Local

If you've never run a local model before, see the Local Model Quickstart below. Note that local models take several GB of disk space.

The model you use must expose a ChatML-formatted v1/chat/completions endpoint to work with Sublayer (many models do by default).

Usage:

Run your local model on http://localhost:8080 and then set:

Sublayer.configuration.ai_provider = Sublayer::Providers::Local
Sublayer.configuration.ai_model = "LLaMA_CPP"
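To illustrate what a ChatML/OpenAI-compatible v1/chat/completions endpoint expects, here is a sketch of the request body. The payload shape is an assumption based on the OpenAI chat API convention that llamafile mirrors; the actual requests are handled by the Local provider for you:

```ruby
require "json"

# Sketch of the request body a ChatML/OpenAI-compatible
# v1/chat/completions endpoint accepts. Payload shape is an assumption
# based on the OpenAI chat API convention.
payload = {
  model: "LLaMA_CPP",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user",   content: "Say hello in one word." }
  ]
}

# You would POST this JSON to http://localhost:8080/v1/chat/completions
puts JSON.dump(payload)
```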

Local Model Quickstart:

Instructions to run a local model

  1. Setting up Llamafile
cd where/you/keep/your/projects
git clone [email protected]:Mozilla-Ocho/llamafile.git
cd llamafile

Download: https://cosmo.zip/pub/cosmos/bin/make (Windows users also need: https://justine.lol/cosmo3/)

# within llamafile directory
chmod +x path/to/the/downloaded/make
path/to/the/downloaded/make -j8
sudo path/to/the/downloaded/make install PREFIX=/usr/local

You can now run llamafile.

  2. Downloading a model

Download Hermes-2-Pro-Mistral-7B.Q5_K_M.gguf (5.13 GB)

  3. Running llamafile with a model
llamafile -ngl 9999 -m path/to/the/downloaded/Hermes-2-Pro-Mistral-7B.Q5_K_M.gguf --host 0.0.0.0 -c 4096

You are now running a local model on http://localhost:8080

Recommended Settings for Apple M1 users:

llamafile -ngl 9999 -m Hermes-2-Pro-Mistral-7B.Q5_K_M.gguf --host 0.0.0.0 --nobrowser -c 2048 --gpu APPLE -t 12

Run sysctl -n hw.logicalcpu to see what number to give the -t threads option.

Concepts

Generators

Generators are responsible for generating specific outputs based on input data. They focus on a single generation task and do not perform any actions or complex decision-making. Generators are the building blocks of the Sublayer framework.

Examples (in the /lib/sublayer/generators/examples directory):

  • CodeFromDescriptionGenerator: Generates code based on a description and the technologies used.
  • DescriptionFromCodeGenerator: Generates a description of the code passed in to it.
  • CodeFromBlueprintGenerator: Generates code based on a blueprint, a blueprint description, and a description of the desired code.
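The pattern can be illustrated with a minimal stand-in sketch of a description-to-code generator. The inline base class and the prompt wording here are invented for this example; in the real gem, generators inherit from Sublayer::Generators::Base and #generate sends the prompt to the configured AI provider:

```ruby
# Illustrative sketch of the Generator pattern: a single-purpose class
# that builds a prompt from its inputs and returns one generated output.
# A hypothetical stand-in base class is defined inline so the example is
# self-contained; the real gem provides Sublayer::Generators::Base.
module Sketch
  class GeneratorBase
    # In the real framework, #generate sends the prompt to the configured
    # AI provider; here we simply return the prompt for illustration.
    def generate
      prompt
    end
  end

  class CodeFromDescriptionGenerator < GeneratorBase
    def initialize(description:, technologies:)
      @description = description
      @technologies = technologies
    end

    def prompt
      <<~PROMPT
        You are an expert programmer in #{@technologies.join(", ")}.
        Write code that satisfies the following description:
        #{@description}
      PROMPT
    end
  end
end

generator = Sketch::CodeFromDescriptionGenerator.new(
  description: "Print the numbers 1 through 5",
  technologies: ["Ruby"]
)
puts generator.generate
```

The key property is that each generator does exactly one generation task, so it stays easy to test and compose.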

Actions (Coming Soon)

Actions are responsible for performing specific operations to get inputs for a Generator or based on the generated output from a Generator. They encapsulate a single action and do not involve complex decision-making. Actions are the executable units that bring the generated inputs to life.

Examples:

  • SaveToFileAction: Saves generated output to a file.
  • RunCommandLineCommandAction: Runs a generated command line command.
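Since Actions are not yet released, here is a hedged sketch of what a file-saving action might look like. The class name comes from the example list above, but the constructor and #call interface are assumptions:

```ruby
require "tempfile"

# Illustrative sketch of the Action pattern: a class that performs one
# concrete side effect with generated output. The #call interface is an
# assumption; Actions are listed as "Coming Soon" upstream.
class SaveToFileAction
  def initialize(path:, contents:)
    @path = path
    @contents = contents
  end

  def call
    File.write(@path, @contents)
  end
end

file = Tempfile.new("sublayer_example")
SaveToFileAction.new(path: file.path, contents: "puts 'hello'").call
puts File.read(file.path)  # => puts 'hello'
```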

Tasks (Coming Soon)

Tasks combine Generators and Actions to accomplish a specific goal. They involve a sequence of generation and action steps that may include basic decision-making and flow control. Tasks are the high-level building blocks that define the desired outcome.

Examples:

  • ModifyFileContentsTask: Generates new file contents based on the existing contents and a set of rules, and then saves the new contents to the file.
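A task like the one above can be sketched as a fixed generate-then-act sequence. In this self-contained example a simple upcase transform stands in for the LLM call, and the class and method names are assumptions since Tasks are listed as "Coming Soon":

```ruby
require "tempfile"

# Illustrative sketch of the Task pattern: a sequence of a generation
# step followed by an action step. A simple upcase transform stands in
# for the Generator call.
class ModifyFileContentsTask
  def initialize(path:)
    @path = path
  end

  def run
    original = File.read(@path)    # gather input for the generator
    updated  = transform(original) # generation step (stand-in)
    File.write(@path, updated)     # action step: save the result
  end

  private

  # Placeholder for a real Generator call.
  def transform(text)
    text.upcase
  end
end

file = Tempfile.new("task_example")
File.write(file.path, "hello world")
ModifyFileContentsTask.new(path: file.path).run
puts File.read(file.path)  # => HELLO WORLD
```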

Agents (Coming Soon)

Agents are high-level entities that coordinate and orchestrate multiple Tasks to achieve a broader goal. They involve complex decision-making, monitoring, and adaptation based on the outcomes of the Tasks. Agents are the intelligent supervisors that manage the overall workflow.

Examples:

  • CustomerSupportAgent: Handles customer support inquiries by using various Tasks such as understanding the customer's issue, generating appropriate responses, and performing actions like sending emails or creating support tickets.
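The coordination loop at the heart of an agent can be sketched as follows. All names here are assumptions (Agents are "Coming Soon"); the lambdas stand in for Tasks and for the goal-monitoring logic:

```ruby
# Illustrative sketch of the Agent pattern: a loop that monitors a goal
# condition and dispatches task steps until the goal is met or an
# iteration budget runs out.
class SimpleAgent
  def initialize(goal_checker:, steps:)
    @goal_checker = goal_checker # callable returning true when done
    @steps = steps               # callables standing in for Tasks
  end

  def run(max_iterations: 5)
    max_iterations.times do
      return :goal_met if @goal_checker.call
      @steps.each(&:call)
    end
    :gave_up
  end
end

counter = 0
agent = SimpleAgent.new(
  goal_checker: -> { counter >= 3 },
  steps: [-> { counter += 1 }]
)
puts agent.run  # => goal_met
```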

Usage Examples

There are sample Generators in the /examples/ directory that demonstrate how to build generators using the Sublayer framework. Alternatively, below are links to open source projects that use generators in different ways:

  • Blueprints - An open source AI code assistant that allows you to capture patterns in your codebase to use as a base for generating new code.

  • Clag - A ruby gem that generates command line commands from a simple description right in your terminal.

Development

TBD

Contributing

TBD

sublayer's People

Contributors: andrewbkang, drnic, swerner

sublayer's Issues

The current pattern for Sublayer::Task doesn't generate well with Blueprints

Blueprints is very effective at generating Sublayer::Generators and Sublayer::Actions but the pattern I've been exploring with Sublayer::Tasks doesn't seem to work very well.

I tried to create a blueprint of the MakeRspecTestsPassTask here and generate a few variations, but it didn't jump to using and chaining actions and generators together.

Working to brainstorm ways to get it to reliably create new tasks and hallucinate the Sublayer::Actions and Sublayer::Generators it would need to complete the task...

Supporting native claude tools

https://docs.anthropic.com/claude/docs/tool-use

Claude now has native tools/functions. Theoretically we could rip out the Providers::Claude implementation and rewrite it to look similar to the ::OpenAI provider.

Except the schema of Claude 3 tools looks different from OpenAI functions.

Suggestion: OutputProviders rename to_hash to to_openai_hash, and then we add to_claude3_hash to be used for the Claude provider? Fingers crossed there isn't a distinct tool/function OutputProvider schema for every LLM. My suggestion won't stand the test of time.

Mechanism to use a different model from the same provider

For example, being able to optionally choose between using gpt-3.5 for something, using gpt-4-turbo for others, or between using claude-haiku for some and claude-opus for others if you want, but for the majority of cases being able to mostly rely on the defaults.

Multi-model providers break the universal function calling expectation inside providers

For model providers like Groq, Local, OpenRouter, etc., each model you might want to use inside these services can have different function-calling/tool-calling mechanisms and quirks. Specifically, I just tried generating something with Groq+llama3 and the XML response came back in a different format.

For now, going to move away from providing direct solutions for things like Groq and make it similar to what we're doing with output_adapters where you can specify a custom AI provider in your code to use and implement the specific quirks of that model. Over time we'll find patterns that work with things like Llama3 or Mistral or Hermes2 that we can provide as bases.
