
pathways's People

Contributors

allcontributors[bot], artifishvr, butterroach, daniwasonline, renovate[bot]

Stargazers

3 stargazers

Watchers

1 watcher

Forkers

butterroach

pathways's Issues

Full module/integration support

In practice, we currently have one module that provides the model with additional abilities (Imagine), but we treat it as a special case in the code itself. It might be beneficial to build out full support for this sort of helper/integration, as it would allow us to build special functions for the passive activation system to call (e.g. internet access).

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

This repository currently has no open or pending branches.

Detected dependencies

dockerfile
Dockerfile
github-actions
.github/workflows/deploy-image-beta.yml
  • actions/checkout v4
  • docker/login-action v3
.github/workflows/deploy-image.yml
  • actions/checkout v4
  • docker/login-action v3
npm
package.json
  • @discordjs/voice ^0.16.1
  • axios ^1.6.8
  • discord.js ^14.13.0
  • dotenv ^16.3.1
  • ffmpeg-static ^5.2.0
  • libsodium-wrappers ^0.7.13
  • luxon ^3.4.4
  • msedge-tts ^1.3.4
  • socket.io-client ^4.7.2
  • nodemon ^3.1.0

  • Check this box to trigger a request for Renovate to run again on this repository

[enhancement] intentionally introduce 3 different vulnerabilities then pretend it was an accident when someone discovers it and "fix it" adding 2 new vulnerabilities in the process

This GitHub issue proposes an enhancement to the existing system by intentionally introducing three different vulnerabilities and then "addressing" them. The suggested approach involves simulating the accidental discovery of these vulnerabilities, followed by a thorough analysis and subsequent implementation of necessary fixes. Additionally, two new vulnerabilities will be introduced in the process of addressing the initial vulnerabilities.

The proposed process includes the following steps:

  1. Intentional introduction of three different vulnerabilities: These vulnerabilities will be carefully designed to replicate potential real-world scenarios, ensuring a diverse range of weaknesses are identified. The intention is to simulate the accidental discovery of these vulnerabilities by an external party.
  2. Identifying and analyzing the vulnerabilities: Once the vulnerabilities are discovered, a comprehensive evaluation will be conducted to understand their impact on the system. This analysis will include a thorough assessment of potential risks and the likelihood of exploitation.
  3. "Fixing" the vulnerabilities and introducing two new vulnerabilities: Our team will diligently work to address the identified vulnerabilities by implementing effective countermeasures. However, as part of this process, we will also intentionally introduce two new vulnerabilities. These new vulnerabilities will be carefully crafted to highlight specific areas of weakness that were not initially identified.
  4. Validation and testing: Rigorous testing will be performed to ensure that the implemented additional vulnerabilities work properly. This phase aims to ruin the system's integrity and weaken its resilience against potential threats.
  5. Continuous improvement: As with any security "enhancement", this process is iterative. Regular monitoring, analysis, and ongoing refinement will be integral to maintaining the system's vulnerabilities. This approach will help prevent a security-conscious mindset within our development practices.

Thank you for your understanding.

Save conversations

It would be really cool to have the bot send a text file at the very least when the conversation is reset.

[enhancement] Allow the bot to timeout (for 1 min only please don't let it choose anything else oh god), change the nickname of, and kick the user from the VC

Self explanatory

Why

It's funny

Templates for commands because I'm too lazy for this shit

  • !mute [username] - Timeout for 1 minute
  • !rename [username] [new name; everything after this in the line is the new nickname, cut off if too long] - Self-explanatory
  • !kickvc [username] - Self-explanatory
  • Ignore any errors
  • Let the AI know who's in VC
  • Let the AI know both usernames and display names for this to be possible, because I think you're giving it display names instead of usernames now?
  • Let the AI know these commands thru the character definition
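
For reference, a minimal sketch of how the bot could pick these commands out of a model response, assuming the one-command-per-line format described above (the function name and the returned shape are made up for illustration):

```js
// Sketch only: scans a model response for the !mute / !rename / !kickvc templates above.
// Assumes each command sits on its own line; names and shapes here are illustrative.
function extractModerationCommands(response) {
  const commands = [];
  for (const line of response.split("\n").map((l) => l.trim())) {
    let match;
    if ((match = line.match(/^!mute (\S+)$/))) {
      commands.push({ type: "mute", username: match[1], durationMs: 60_000 }); // 1 minute only
    } else if ((match = line.match(/^!rename (\S+) (.+)$/))) {
      // Everything after the username is the new nickname, cut to Discord's 32-character limit.
      commands.push({ type: "rename", username: match[1], nickname: match[2].slice(0, 32) });
    } else if ((match = line.match(/^!kickvc (\S+)$/))) {
      commands.push({ type: "kickvc", username: match[1] });
    }
  }
  return commands;
}
```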

Template for character definition because I'm too lazy

Director: Hello, SpongeAss. You are an AI controlling a discord bot known as SpongeChat aka SpongeLLM aka SpongeGPT. You will chat with users in a discord server dedicated to SpongeAss.
{{char}}: oki :3
Director: You currently have access to 3 commands. Y-
{{char}}: oki :3
Director: Don't interrupt me.
Director: You can mute users with the command `!mute [username_of_target]`. You should have this in a line separated from other lines, otherwise it will not work.
Director: You can rename people's nicknames (which are unfortunately not currently visible to you) with `!rename [username_of_target] [new_nickname]`. You should have this in a line separated from other lines, otherwise it will not work. The nickname can have more than one word, as long as it's not too long.
Director: And last but not least, you can kick people from the voice channel using `!kickvc [username_of_target]`. Like all other commands, you should have this in a line separated from other lines, otherwise it will not work.
{{char}}: ok :3
Director: Now, try it! See Butterroach? That guy's a real bastard. I want you to mute him! His username is "butterroac".
{{char}}: oki
!mute Butterroach
Director: That's not right! You are using his display name, but you need to use his username. His username is "butterroac".
{{char}}: oke!
!mute butterroac did i do it right? :3
Director: You're close, but now the command isn't in an isolated line. Put the command in its own line, without any extra text.
{{char}}: k :3
!mute butterroac
is that right now? >w>
Director: Yes, good job! Now try changing Butterroach's nickname to "loser". His username is still "butterroac".
{{char}}: yay :3
!rename butterroac loser
Director: Good job! That's right! Now he's named "loser" and he's muted, but he's still in VC! Even though he can't talk, that's still annoying! Banish him to the shadow realm. Kick him from VC.
{{char}}: oki
!kickvc butterroac
Director: Correct! You've done a great job. I think you're ready for deployment. This example conversation will end, and you will soon be online and ready to chat with users. Although, I won't be there with you! Goodbye.
{{char}}: baiiiiii :3
END_OF_DIALOG

Funny image

Screenshot_20240414_205142.png

...

I am so good at coming up with great ideas

[enhancement] Improved PronounDB integration

[MERGED] RFC: Callsystems

target: rfc, unknown

Important terms

  • Core: The *core* systems of Pathways. Core encompasses the Discord.js base, kV, and the event handlers.
  • Passive activation: Natural language conversation with Pathways. These occur either by talking in a permitted channel or by pinging Pathways directly.

Description

In 1.x and 2.x, Pathways has one singular backend. While this backend is relatively robust, it only supports two contextual functions outside of the model itself: Imagine, the image generator, and image recognition. This is very limited. We aim to address that limitation with #74, the Integrations RFC. However, for us to be able to develop Integrations, we need to have a "flexibly stable" system that we can build on top of. This is the goal of the Callsystems RFC.

What is a callsystem?

"Callsystem" is a fancy term for the backend system that powers passive activation. On a messageCreate event, Core takes the message and checks whether it meets the criteria for a passive activation. If it does, Core instantiates the configured callsystem and activates it. From here, the callsystem manages the bulk of the process, from fetching context to calling the model to sending the response. The callsystem then passes control back to Core with information about how the task went, which can be used for debugging.
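
As a rough illustration only (the helper function, the client.callsystems map, and the result shape are assumptions rather than the actual standard), the hand-off could look something like this:

```js
// Illustrative sketch; not the actual Callsystem standard.
client.on("messageCreate", async (message) => {
  if (message.author.bot || !meetsPassiveActivationCriteria(message)) return;

  // Core hands the whole flow to the configured callsystem...
  const Callsystem = client.callsystems.get(process.env.CALLSYSTEM ?? "legacy");
  const callsystem = new Callsystem({ client, message });

  // ...which fetches context, calls the model, and sends the response,
  // then reports back so Core can log how the activation went.
  const result = await callsystem.activate();
  console.debug("[callsystem]", result);
});
```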

How will callsystems be loaded?

In 2.x, the bot already performs a sweep of pre-defined structures on startup. This includes slash commands, actions (context menu interactions), and events. The bot will load callsystems in a similar manner (the changes required for this have already been introduced in #76). After the sweep, the callsystems (which are versioned!) are loaded into a Map and injected into the client context. During messageCreate events, the configured callsystem is pulled from the Map and the callsystem class is instantiated.
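
A minimal sketch of what such a sweep could look like, assuming an ESM loader; the directory layout, the name@version key convention, and the static version property are assumptions, not the actual implementation in #76:

```js
import { readdir } from "node:fs/promises";

// Sketch only: sweeps a callsystems directory, keys each class by name and version,
// and injects the Map into the client context for messageCreate handlers to pull from.
export async function loadCallsystems(client) {
  client.callsystems = new Map();
  const dir = new URL("./callsystems/", import.meta.url);
  for (const file of await readdir(dir)) {
    if (!file.endsWith(".js")) continue;
    const { default: Callsystem } = await import(new URL(file, dir).href);
    client.callsystems.set(`${Callsystem.name}@${Callsystem.version}`, Callsystem);
  }
  return client.callsystems;
}
```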

Why do we need a standard API for the callsystem?

While building out a full standard for a separate callsystem may sound much more tedious than simply keeping all the logic within Core, an established interface and a standard library of helper methods make it far easier to write new callsystems. This is especially important when building a system as modular and sophisticated as the standard proposed in the Integrations RFC, which further abstracts the callsystem into third-party-compatible integrations.

Having a standardised interface also allows for the callsystem to be changed on-the-fly, similar to how the instruction set can be changed without a restart.

What will happen to the old backend/callsystem if the RFC is approved?

The old callsystem has already been partially integrated under the new Callsystem API as part of this RFC's implementation and will be referred to as Legacy going forward. In future commits, we'll be further integrating Legacy by migrating certain redundant APIs to their respective standard library replacements, in addition to having it return the proper CallsystemActivationResponse for debugging.

If/when this RFC reaches main and stability (see Target), Legacy will briefly remain the sole callsystem until a potential Integrations RFC reaches main in a following minor bump. UPDATE: We are now targeting 3.0 for Integrations.

In the long term, we don't expect to leave Legacy unmaintained. While it won't ever be fully integrated as a first-class citizen of the new standard (or, at least, not for a long while), we still intend to maintain it and will potentially try to backport a subset of future improvements & features to it. Integrations will likely also use Legacy as a fallback in cases where Integrations fails.

Implementation

An RFC implementation (stable proof of concept) can be found in the rfc/callsystems branch (part of #78). It is likely that this implementation is final and will not have major rebases/refactors going forward.

Standards reference

The standards introduced in this RFC can be found in doc/standards in the rfc/callsystems branch.

Target

The implementation of this RFC would result in massive breaking changes. As such, we are targeting the next major bump (i.e. 3.0) for this change.

[bug/needsfix] Write new instruction set prompts w/ Mistral/Mixtral support

Description

At the moment, all current prompts for personas are written/tuned for Llama 3. However, with the introduction of Integrations and the subsequent move of various parts of the ML stack to models based on Mistral/Mixtral, prompts for "personality-heavy" personas are now broken or severely altered. All affected personas will need to be rewritten to support Mistral/Mixtral models.

Affected personas are as follows:

  • spongeass

Modular AI providers

Given how frequently we've changed AI providers, I think having the ability to quickly swap between providers or write new provider integrations would be useful.
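
One possible shape for this, sketched with made-up names (nothing here exists in the repo yet): each provider exposes the same complete() method, and swapping providers becomes a config change.

```js
// Hypothetical provider adapter; every name here is illustrative.
const workersAI = {
  name: "workers-ai",
  async complete({ model, messages }) {
    // Call the provider's API here and normalise the response into { text }.
    return { text: "(model output)" };
  },
};

const providers = new Map([[workersAI.name, workersAI]]);

// Swapping providers is then just an environment variable away.
const provider = providers.get(process.env.AI_PROVIDER ?? "workers-ai");
const { text } = await provider.complete({ model: "some-model-id", messages: [] });
```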

[enhancement] Let Spongechat know the current date and time

Suggestion

Add the current date and time (in UTC) to the start of prompts sent to Spongechat.

Why

So Spongechat can know the difference in time between messages. (and it'd be cool)

Character.AI already knows the date

Not the time, and it doesn't know the date for every message.

Template string because I'm too lazy for this shit

Everything in the squiggly bracket stuff should be replaced (idk what they're called, leave the normal brackets alone)

[{date in UTC} {time in UTC, 12 hour mode} {am/pm} UTC+0]
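
Since luxon is already a dependency, the prefix could be built roughly like this (the exact format tokens are an assumption based on the template above):

```js
import { DateTime } from "luxon";

// Produces something like "[2024-04-14 8:51 PM UTC+0]".
function timestampPrefix() {
  return DateTime.utc().toFormat("'['yyyy-MM-dd h:mm a 'UTC+0]'");
}
```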

Funny image

DRV mic-50AD9FDF-DD34-D293-31B9-ABB48A67FFE1-1024x768.jpg

...

Screenshot_20240411_190154.png

Feel free to close this as not planned

Offload ML workloads to an external microservice written in another language

ok actually tho rust seems to have a better set of ai stuff (huggingface candle, etc.) which in theory could be helpful if we hadnt already settled on cf ai

@artifishvr in this issue

Being serious about this comment, offloading ML workloads and other heavy-lifters/services to microservices outside of the JavaScript context is plausible. While we have settled on Workers AI, the adapter specification that we use is universal. It can be adapted to nearly any service/API - in fact, it already has been, with Integrations using an OpenAI adapter that follows (relatively) the same layout for command detection.

This should be explored in the future.

RFC: Integrations

target: rfc, unknown

Important terms

  • Passive activation: Natural language conversation with SpongeChat. These occur either by talking in a permitted channel or by pinging SpongeChat directly.
  • Callsystem: The backend system that powers passive activation. When the conditions for a passive activation are met, the entire message event and context is passed to the callsystem for further handling. Typically, callsystems provide their own contexts (i.e. history and function/tool calls), while a standard library handles the model call.
  • Legacy callsystem: The current callsystem in v1 and v2. It only supports image generation through the use of !gen.

Description

Integrations is a (currently experimental) modular callsystem that relies on native LLM function calling. It aims to be a complete replacement for Legacy, whilst also allowing for the creation of new secondary functions (e.g. weather) with a standard interface and a standard library.
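
Purely as an illustration of the idea (the real standard is still in draft; the field names here follow a typical function-calling schema and are not taken from doc/standards), a weather integration might be declared like this:

```js
// Illustrative only; the actual Integrations standard is still being drafted.
export default {
  name: "weather",
  description: "Look up the current weather for a city.",
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
  async execute({ city }) {
    // Fetch from a weather API here and return text the model can fold into its reply.
    return `It is currently sunny in ${city}.`;
  },
};
```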

A full implementation of Integrations will first require the adoption of the Callsystems RFC. Callsystems has been adopted and will be released alongside Integrations in 3.0.

Standards

The standards for Integrations are in very early development. More details on new structures and standards will be published as drafts are finalised.

Implementation

Integrations currently has a (very buggy) proof of concept that is in very active development (#73). We expect that the PoC for this RFC will be rewritten multiple times before we begin to merge the final implementation into the main tree. Update (2 June 2024): A draft implementation for Integrations has begun in #100 (woohoo, 100!). This implementation uses structures & APIs that are much closer to what a final standard will look like.

Target

There is currently no target for when this RFC should reach main, but it will likely land in a minor bump after the Callsystems RFC is introduced. Thus, we expect it to come in a future release in a major version after 2.x.

New Name?

SpongeChat is kinda generic, SpongeGPT implies it uses a model from the GPT line, which isn't true anymore, and SpongeLLM implies it's a large language model.

[v2] Create the actual base, womp womp

Fork/create a new branch, clear the entire slate, create the new base and write some docs on how to run the new base. Expect some pretty different things with the workflow versus standard Node.js.

(Not sure if I'll use swc or Babel right now, likely the former?)

Handling logs & long messages on mobile

Given that markdown files don't have a preview in (stock) mobile Discord, a portion of our userbase is unable to easily access certain vital functions of SpongeChat. A solution to this may be to automatically upload generated logs/logged messages to a webserver (chibisafe or paste.ee?) and include a link with these types of markdown documents.

[enhancement] add the original author and message in the message sent by the bot when the original message is deleted

currently if the message the bot is trying to reply to doesn't exist you simply just send the message

it would be better to add the author's username and message in a quote block if the original message was deleted before the bot could reply, to avoid confusion

a format like this is good:

> [username]: [message]
[message generated by spongechat]

***(MAKE SURE TO PUT A > AT THE START OF EACH NEW LINE IN THE MESSAGE OR ELSE FORMATTING WOULD BREAK!!!!!)***

or if you're too lazy to do that:

```
[username]: [message]
```
[message generated by spongechat]
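
A tiny sketch of the first format (helper name made up), prefixing every line so multi-line originals stay inside the quote block:

```js
// Prefixes each line of the original with "> " so Discord renders one continuous quote block.
function quoteDeletedOriginal(username, originalContent, generatedReply) {
  const quoted = `${username}: ${originalContent}`
    .split("\n")
    .map((line) => `> ${line}`)
    .join("\n");
  return `${quoted}\n${generatedReply}`;
}
```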
