
rift's People

Contributors

andrew-twigg, bentleylong, cristianoc, edayers, eltociear, jacksonkearl, jeremylan, jesse-michael-han, jwd-dev, kataqatsi, pranavmital, semorrison


rift's Issues

CPU usage continues after a completion is cancelled (by clicking on `running`)

When a cancellation event occurs, the underlying streaming process associated with TextStream and GPT4ALL Model continues to run and consume CPU resources until the model has finished. This behavior is undesirable, as it leads to unnecessary resource utilization, and can negatively impact system performance. The streaming process should halt its operation immediately upon receiving a cancellation event to free up system resources.
[written by AI of course]
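
One way the fix could look, sketched with asyncio and a stand-in generator (the names below are hypothetical, not the actual TextStream or GPT4All interfaces): when the consuming task is cancelled, the generator is unwound immediately instead of running to completion.

```python
import asyncio

async def fake_token_stream(n):
    # Stand-in for the model's token generator (hypothetical).
    for i in range(n):
        await asyncio.sleep(0.01)
        yield f"tok{i}"

async def run_completion(tokens_out, n=100):
    # Consuming the stream inside the task means task.cancel() raises
    # CancelledError here, unwinding the generator so no background
    # work keeps burning CPU after the user clicks `running`.
    async for tok in fake_token_stream(n):
        tokens_out.append(tok)

async def main():
    tokens = []
    task = asyncio.create_task(run_completion(tokens))
    await asyncio.sleep(0.05)   # let a few tokens arrive
    task.cancel()               # the cancellation event
    try:
        await task
    except asyncio.CancelledError:
        pass
    return tokens

collected = asyncio.run(main())
```

Only a handful of tokens are collected before cancellation; the remaining ~95 iterations never run.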

Generic auto-truncation feature for `run_chat` and `run_helper`

Currently, both run_chat and run_helper error out if the current code window does not fit into context. There should be a generic truncation function/object which accepts the vscode document + cursor position metadata processed by the client in Rift and narrows the context to a window of N tokens around the center of the current window.

This will solve #17 and related issues reported in #rift-support.
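
The core of such a truncation function could be as simple as this sketch (a hypothetical helper, not Rift's API):

```python
def truncate_window(tokens, cursor_index, max_tokens):
    # Narrow the context to a window of at most max_tokens tokens
    # centered on the cursor position (hypothetical helper).
    if len(tokens) <= max_tokens:
        return tokens
    half = max_tokens // 2
    # Clamp so the window stays inside the document.
    start = max(0, min(cursor_index - half, len(tokens) - max_tokens))
    return tokens[start:start + max_tokens]
```

The clamping keeps the window centered in the middle of a file but pinned to the edges near the start or end, so the result always has exactly `max_tokens` tokens when the input is larger than the budget.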

Why custom configuration handling?

It seems like the server uses a custom `morph/set_model_config` request for sending configuration from the editor.

Maybe I'm missing something after skimming through the code, but I wonder why it uses a custom request and not the standard `workspace/didChangeConfiguration` and `workspace/configuration` functionality?

Using standard LSP functionality would work in all LSP-conforming client implementations, while custom requests need extra code.
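
For reference, this is the shape of the standard notification a client would send instead of the custom request; the `rift` settings payload below is purely illustrative, and the handler is a sketch of what the server side could do:

```python
# Standard LSP notification shape (illustrative settings payload).
did_change_configuration = {
    "jsonrpc": "2.0",
    "method": "workspace/didChangeConfiguration",
    "params": {
        "settings": {
            "rift": {
                "chatModel": "openai:gpt-3.5-turbo",
                "completionsModel": "openai:gpt-3.5-turbo",
            }
        }
    },
}

def handle_notification(msg, config):
    # Sketch of a server-side handler: merge the new settings into
    # the current model configuration.
    if msg["method"] == "workspace/didChangeConfiguration":
        config.update(msg["params"]["settings"].get("rift", {}))
    return config
```

Any LSP-conforming editor already knows how to emit this notification, which is the portability argument made above.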

Rift executable not found by VSCode extension when running in WSL

I am running on Windows 11 but do all my coding in WSL; VSCode makes it a flawless, completely transparent experience.

However, when I open the Rift sidebar, I immediately get this error that the extension could not find the rift executable.

(screenshot)

When I then open the terminal and run rift manually from there (which means the executable is clearly on the PATH, but inside WSL, not in the Windows host system), it suddenly works and all the rift gizmos come to life.

Not a huge deal since it does work after running it manually, just a small annoyance that seems unnecessary.

bash reinstall.sh fails due to missing vsce and code

Errors I noticed:
reinstall.sh: line 18: vsce: command not found
reinstall.sh: line 22: code: command not found

Steps taken to fix:
brew install vsce
Open VS Code, shift+cmd+p, "Shell command: install 'code' command in PATH"
re-run bash reinstall.sh

Support remote LSPs

The idea is that you should be able to connect to a Rift server that is running on a different (trusted) computer. This should all work, just need to make the connection string configurable.
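
Making the connection string configurable could be as small as this sketch (a hypothetical helper; the default port matches the `--port 7797` used elsewhere in these issues):

```python
def parse_connection_string(conn, default_host="127.0.0.1", default_port=7797):
    # Parse a "host:port" setting so the client can reach a Rift
    # server on another (trusted) machine; falls back to the local
    # defaults when the setting is empty.
    if not conn:
        return default_host, default_port
    host, sep, port = conn.rpartition(":")
    if not sep:
        return conn, default_port  # bare hostname, keep default port
    return host, int(port)
```

The editor extension would read this setting and dial the returned `(host, port)` instead of always spawning a local server.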

[Feature Request] Add support for debugger information

It would be cool to be able to pass the debugger state and other related debugging information to an AI language model in Rift. This would enable us to build debugging assistants on Rift 🤓

P.S. Please let me know if there is any other information or specifics that are needed for an issue -- did not notice any issue templates for rift.

Can't get VSCode extension working...

I tried installing via the extensions menu, and then again from source.

With either, I end up at the same error message:

unexpected error: Command failed: /Users/scott/.morph/env/bin/rift
Traceback (most recent call last):
  File "/Users/scott/.morph/env/bin/rift", line 5, in <module>
    from rift.server.core import main
  File "/Users/scott/.morph/rift/rift-engine/rift/server/core.py", line 12, in <module>
    from rift.llm.gpt4all_model import Gpt4AllModel, Gpt4AllSettings
  File "/Users/scott/.morph/rift/rift-engine/rift/llm/gpt4all_model.py", line 9, in <module>
    from gpt4all import GPT4All
  File "/Users/scott/.morph/env/lib/python3.10/site-packages/gpt4all/__init__.py", line 1, in <module>
    from .pyllmodel import LLModel  # noqa
  File "/Users/scott/.morph/env/lib/python3.10/site-packages/gpt4all/pyllmodel.py", line 50, in <module>
    llmodel = load_llmodel_library()
  File "/Users/scott/.morph/env/lib/python3.10/site-packages/gpt4all/pyllmodel.py", line 46, in load_llmodel_library
    llmodel_lib = ctypes.CDLL(llmodel_dir)
  File "/usr/local/Cellar/[email protected]/3.10.12_1/Frameworks/Python.fr...

(apologies for the terrible formatting and the truncation; I'm copying and pasting from the VSCode pop-up).

Add a Python linter to CI

We should decide on a standard Python linter / formatter to enforce code style and best practices.

Rather than running these in a pre-commit hook I think we should minimize friction for contributors by having these run upon creation of a PR + add contribution instructions to run the linter and formatter locally before creating a PR.

PydanticImportError

Issue: I am unable to run rift.server.core; see the error below:

Windows 10 OS.

Python 3 installed.

I ran the following in a Powershell 5 terminal:

PS C:\Users\REDACTED\source\repos\rift> python -m venv rift_env

PS C:\Users\REDACTED\source\repos\rift> .\rift_env\Scripts\Activate.ps1

(rift_env) PS C:\Users\REDACTED\source\repos\rift> pip install -e .\rift-engine

(rift_env) PS C:\Users\REDACTED\source\repos\rift> python --version
Python 3.11.3

(rift_env) PS C:\Users\REDACTED\source\repos\rift> python -m rift.server.core --port 7797
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\server\core.py", line 5, in <module>
    from rift.server.lsp import LspServer
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\server\lsp.py", line 9, in <module>
    from rift.llm.abstract import (
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\llm\__init__.py", line 1, in <module>
    from .openai_client import OpenAIClient
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\llm\openai_client.py", line 20, in <module>
    from pydantic import BaseModel, BaseSettings, SecretStr
  File "C:\Users\REDACTED\source\repos\rift\rift_env\Lib\site-packages\pydantic\__init__.py", line 206, in __getattr__
    return _getattr_migration(attr_name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\REDACTED\source\repos\rift\rift_env\Lib\site-packages\pydantic\_migration.py", line 279, in wrapper
    raise PydanticImportError(
pydantic.errors.PydanticImportError: `BaseSettings` has been moved to the `pydantic-settings` package. See https://docs.pydantic.dev/2.0/migration/#basesettings-has-moved-to-pydantic-settings for more details.

For further information visit https://errors.pydantic.dev/2.0/u/import-error

Expected: it works.
Actual: error.
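
This is the Pydantic v2 breaking change the error message links to: `BaseSettings` moved into the separate `pydantic-settings` package. Pinning `pip install "pydantic<2"`, or installing `pydantic-settings` and updating the import in `rift/llm/openai_client.py`, should unblock the server. A version-tolerant import would look like this sketch (not Rift's shipped code):

```python
# Resolve BaseSettings across pydantic major versions (sketch only).
try:
    from pydantic_settings import BaseSettings  # pydantic >= 2
except ImportError:
    try:
        from pydantic import BaseSettings  # pydantic < 2
    except ImportError:
        BaseSettings = None  # pydantic not installed in this environment
```

Either branch leaves a `BaseSettings` name bound, so the rest of the module is unaffected by which major version is installed.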

Supporting C#

Hi,
I'm trying to use rift with gpt4all model with questions on C#.
Example: I've this code
string[] filePaths = Directory.GetFiles(@"c:\MyDir\");
And I ask rift to convert it to .NET Core.
I'm not getting any meaningful results.

Do you have any hints on how to tune it for C# and .NET Core?

thank you

Error while using Rift (OpenAI key is missing)

Hi,
I installed rift engine on my local machine.
Running the extension, got the following error in the server

<LspServer 1> transport closed gracefully: end of stream                                                                                jsonrpc.py:541
           INFO     exiting serve_forever loop                                                                                                              jsonrpc.py:569
           INFO     <LspServer 1> entered shutdown state                                                                                                    jsonrpc.py:584
           INFO     initializing LSP server <LspServer 2>                                                                                                     server.py:47
           INFO     client initialized.                                                                                                                       server.py:55
[18:46:12] INFO     <LspServer 2> recieved model config chatModel='openai:gpt-3.5-turbo' completionsModel='openai:gpt-3.5-turbo' openaiKey=None                 lsp.py:249
           ERROR    <LspServer 2> request morph/run_chat:1 unhandled ValidationError:                                                                       jsonrpc.py:648
                    1 validation error for OpenAIClient   

Full error: https://pastebin.com/HmzBTuMW

Why is it looking for an OpenAI key? I thought this was a local LLM model that handles Copilot-style completions without needing OpenAI.

Implement a Sublime Text plugin

Hi

I started working on implementing a Sublime Text plugin.
After some research I have unfortunately realised it's a bigger job than I have capacity for at this time.
However, I did spend some time on it, so I'll share my notes here in the hope they are helpful.

A Sublime Plugin for Rift should leverage the existing LSP plugin architecture provided by the standard LSP plugin.
I managed to have this plugin start a local instance of the Rift server with the inlined `LSP.sublime-settings` file below.

  • Assuming the Rift server binary can be published on PyPI (there is an open issue, so it will happen), PipClientHandler, a class from lsp-utils, can be used to install and manage the server. This is documented here and an example of its use can be found here.

  • The current local server installation (i.e. without PyPI) can be supported, but it would require using the GenericClientHandler class instead and creating a new ServerResourceHandler class deriving from ServerResourceInterface to manage the server lifecycle (all of these classes are part of the lsp-utils repo above).

Once a reference to a running server can be established with one of the two methods above, it should be possible to leverage the LSP Session object to send requests to the server.
A similar plugin for Copilot can be found here, it has a lot of the code needed.

Hope it helps.
Thanks

Config file LSP.sublime-settings

{
  "log_debug": true,
  "clients": {
    "rift": {
      "enabled": true,
      "command": [
        "python3",
        "/Users/jeremylan/Development/git/rift/rift-engine/rift/server/core.py",
        "--port",
        "stdio"
      ],
      "selector": "source.python"
    }
  }
}

No pytorch error

Amazing project! I tried running it and ran into this issue:

On Python 3.11.4, pip 22.3.1.

(screenshot)

No visual clue for streaming events

Currently, using the VS Code plugin, the only way I can monitor what is going on with the server after a prompt is submitted is to check the server terminal window, i.e.
INFO Created chat stream, awaiting results.

This is not ideal for a VS Code user; a visual clue (such as a spinner or the Morph icon changing colour) in the chat window would be useful.

Idea: agent to add missing types in a region/file.

AST support is coming: #75
When that is done, an agent can take a region, or an entire file, and suggest edits to add missing types.

This is one example of combining computation (to find what types are missing) and prediction (from llm, to suggest the type).

Aider version?

Hi,

I've been using Aider a lot just via my terminal/cli. I just installed Rift to check it out but it seems that it's using an old version of Aider (v0.91)? Is there any way to update the version of Aider? And for that matter is it possible to update all of the programs it uses (GPT Engineer, Smol Dev, etc.)? I like the idea of Rift but if it can't access the latest version of the programs then I'd rather just use Aider/GPT Engineer natively.

Cheers,

Read workspace into memory from the `workspaceFolders` passed into `/initialize`

The base LSP server's handler for /initialize is currently a no-op:

    @rpc_method("initialize")
    async def on_initialize(self, params: InitializeParams) -> InitializeResult:
        # [todo] inject lsp capabilities here.
        logger.info(f"initializing LSP server {self.name}")
        return InitializeResult(
            serverInfo=PeerInfo(name=self.name, version=None),
            capabilities=self.capabilities,
        )

It should, if workspace folders / a root dir are specified in the params, load files (subject to reasonable constraints, e.g. excluding large binary blobs, or only including those tracked by git) into self.documents so that the server starts with an in-memory view of the workspace which is updated by didChange and didOpen events from the client.
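
A sketch of that loading step (the suffix filter, size cap, and function name are assumptions, not Rift's actual code):

```python
from pathlib import Path

TEXT_SUFFIXES = {".py", ".md", ".txt", ".json"}  # illustrative filter
MAX_BYTES = 1_000_000  # skip large blobs

def load_workspace(root):
    # Build the in-memory view that on_initialize could seed
    # self.documents with, keyed by file URI.
    docs = {}
    for p in Path(root).rglob("*"):
        if (
            p.is_file()
            and p.suffix in TEXT_SUFFIXES
            and p.stat().st_size <= MAX_BYTES
        ):
            docs[p.as_uri()] = p.read_text(encoding="utf-8", errors="replace")
    return docs
```

didChange and didOpen handlers would then mutate this same dict, keeping the server's view consistent with the client's buffers.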

add ability for LspServer instances to talk to each other.

Currently each instance of LspServer is unaware of other RPC servers connected to the process, so you can't have more advanced behaviour where some different connection on a different process is issuing workspaceEdit commands etc.

This issue outlines a plan to make rift-engine support this.

When you start an instance of rift-engine, the entry point is a class called CodeCapabilitiesServer, which is responsible for taking incoming TCP connections and creating an instance of LspServer which then handles RPC on that connection.

class CodeCapabilitiesServer:
    servers: dict[str, LspServer]

    def run_server(self):
        server = LspServer(self)
        self.servers[server.id] = server
        try:
            ...
        finally:
            # dead connections should be removed.
            del self.servers[server.id]
Then inside each LspServer, we can access all of the other connections with self.parent.servers.items() and issue commands on other connections.
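
A runnable miniature of that registry pattern (the class names below are stand-ins, not the real CodeCapabilitiesServer/LspServer):

```python
class Server:
    # Minimal stand-in for LspServer (hypothetical).
    _next_id = 0

    def __init__(self, parent):
        self.parent = parent
        Server._next_id += 1
        self.id = f"server-{Server._next_id}"

class Registry:
    # Minimal stand-in for CodeCapabilitiesServer: tracks live
    # connections so each server can reach its peers via self.parent.
    def __init__(self):
        self.servers = {}

    def connect(self):
        server = Server(self)
        self.servers[server.id] = server
        return server

    def disconnect(self, server):
        # Dead connections are removed so peers never see stale entries.
        del self.servers[server.id]

registry = Registry()
a = registry.connect()
b = registry.connect()
# From inside one connection, the others are visible via the parent:
peers = [s for sid, s in a.parent.servers.items() if sid != a.id]
```

The important property is that the registry is the single source of truth: a server never caches peer references, so a dropped connection disappears for everyone at once.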

Error while finding module specification for 'rift.server.core' (ModuleNotFoundError: No module named 'rift')

Error comes after completing pip install -e ./rift-engine

Rift is already installed:

Name: rift
Version: 0.0.3
Summary:
Home-page:
Author:
Author-email: Morph Labs <[email protected]>
License:
Location: /usr/local/lib/python3.11/site-packages
Editable project location: /Users/agrim/Downloads/rift/rift-engine
Requires: aiohttp, fire, gpt4all, miniscutil, pydantic, rich, sentencepiece, tiktoken, torch, transformers

edit: never mind, user error on my end. Make sure your pip and python versions match, and run inside a venv.

Error when running server in windows with Python 3.10.11

Hello, first of all this is really great, congratulations.
I was excited and wanted to try it on my machine; however, I got an error when running

 python -m rift.server.core --port 7797

the error message:

$ python -m rift.server.core --port 7797
Traceback (most recent call last):
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "E:\rift\rift-engine\rift\server\core.py", line 5, in <module>
    from rift.server.lsp import LspServer
  File "E:\rift\rift-engine\rift\server\lsp.py", line 5, in <module>
    from miniscutil.lsp import LspServer as BaseLspServer, rpc_method
  File "C:\Users\prifalab\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\miniscutil\__init__.py", line 7, in <module>
    from .misc import (
  File "C:\Users\prifalab\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\miniscutil\misc.py", line 25, in <module>
    from typing_extensions import deprecated
ImportError: cannot import name 'deprecated' from 'typing_extensions' (C:\Users\prifalab\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\typing_extensions.py)

Am I missing something?
thank you
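
The traceback suggests the installed typing_extensions predates the `deprecated` helper, so upgrading it in the same environment (`python -m pip install -U typing_extensions`) is the likely fix; note the import resolves to the user-site packages, which can shadow a newer copy. A tolerant fallback like this sketch (not miniscutil's actual code) shows how the hard version requirement could be avoided:

```python
# Degrade gracefully when typing_extensions lacks `deprecated`
# (sketch of a possible fix, not miniscutil's shipped code).
try:
    from typing_extensions import deprecated
except ImportError:
    def deprecated(message):
        # No-op decorator factory standing in for the real helper.
        def wrap(obj):
            return obj
        return wrap

@deprecated("use new_api() instead")
def old_api():
    return 42
```

With a recent typing_extensions the real decorator is used (and emits a DeprecationWarning on call); otherwise the no-op stand-in keeps the code importable.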

Rift Agents: add support for `automata`

Hey all,

I've been following the work you are doing at Rift and it seems very interesting to me. I've been working on building coding agents at this repository here, where I am specifically interested in a bottom-up approach to code automation.

I would be happy to help with the integration and testing. This would involve:

  1. Creating rift-engine/rift/agents/automata.py
  2. Implementing an Automata instance of Agent
  3. Implementing the necessary logic in the Automata Agent classes so that batches of file_diff.FileChanges can be emitted easily. This is the part where collaboration would be most useful :)

Thanks!

validation error for OpenAIClient api_key

Discussion

The error mentions an API key - does this project depend on the OpenAI API? I thought the whole point was to have an offline AI model so as not to be spied upon.

Tooling

(screenshot)

Steps

  • Open a new text file in VSCodium (open source VS-Code)
  • Enter the prompt: implement fizz-buzz in Python 3.
  • Run the latest master Rift source code with python -m rift.server.core --port 7797:
    [12:16:58] INFO     starting rift server on 7797                                                                                core.py:150
           INFO     listening with LSP protool on ('127.0.0.1', 7797)                                                            core.py:95
[12:27:43] INFO     <LspServer 1> transport closed gracefully: end of stream                                                 jsonrpc.py:541
           INFO     exiting serve_forever loop                                                                               jsonrpc.py:569
           INFO     <LspServer 1> entered shutdown state                                                                     jsonrpc.py:584
           INFO     initializing LSP server <LspServer 2>                                                                      server.py:47
           INFO     client initialized.                                                                                        server.py:55
[12:27:53] INFO     <LspServer 2> recieved model config chatModel='openai:gpt-3.5-turbo' completionsModel='openai:gpt-3.5-turbo' lsp.py:249
                    openaiKey=None
           ERROR    <LspServer 2> request morph/run_chat:1 unhandled ValidationError:                                        jsonrpc.py:648
                    1 validation error for OpenAIClient
                    api_key
                      field required (type=value_error.missing)
                    This is likely caused by a bug in the morph/run_chat method handler.
                    Traceback (most recent call last):
                      File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\rpc\jsonrpc.py", line 633, in
                    _on_request
                        result = await self._on_request_core(req)
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                      File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\rpc\jsonrpc.py", line 715, in
                    _on_request_core
                        result = await self.dispatcher.dispatch(req.method, params)
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                      File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\rpc\jsonrpc.py", line 246, in
                    dispatch
                        result = await result
                                 ^^^^^^^^^^^^
                      File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\server\lsp.py", line 339, in
                    on_run_chat
                        chat = await self.ensure_chat_model()
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                      File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\server\lsp.py", line 315, in
                    ensure_chat_model
                        await self.get_config()
                      File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\server\lsp.py", line 252, in
                    get_config
                        self.completions_model = config.create_completions()
                                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                      File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\llm\create.py", line 29, in
                    create_completions
                        return create_client(self.completionsModel, self.openaiKey)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                      File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\llm\create.py", line 56, in
                    create_client
                        client = create_client_core(config, openai_api_key)
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                      File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\llm\create.py", line 88, in
                    create_client_core
                        return OpenAIClient.parse_obj(kwargs)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                      File "pydantic\main.py", line 526, in pydantic.main.BaseModel.parse_obj
                      File "pydantic\env_settings.py", line 40, in pydantic.env_settings.BaseSettings.__init__
                      File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
                    pydantic.error_wrappers.ValidationError: 1 validation error for OpenAIClient
                    api_key
                      field required (type=value_error.missing)

Jetbrains IDE support

Would love to try this project once it supports Jetbrains IDEs (such as IntelliJ,PyCharm, WebStorm, etc.)

Installation instructions for non-Python users

To enhance accessibility and user-friendliness, a specific and straightforward set of installation instructions for both Python and rift could be added to the project documentation. This would ease the onboarding process for newcomers unfamiliar with Python who are interested in using rift with other languages.

Is it possible to use the OpenAI API?

I see you list the OpenAI API in the features. Can we use it in VS Code?
My laptop is still struggling to run large language models locally, so it would be a lot easier to use the API.

Rift Agents: add support for `gpt-engineer`

We should add support for gpt-engineer in rift.agents. This would involve:

  1. creating rift-engine/rift/agents/gpt_engineer.py
  2. implementing a GPTEngineer instance of Agent
  3. Refactoring the main running logic of Step objects in the gpt-engineer library so that we can emit batches of file_diff.FileChanges in the implementation of Agent.run().
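
A sketch of what step 3 could look like; `FileChange` here is an illustrative stand-in for rift's `file_diff.FileChange` (the real field names may differ):

```python
from dataclasses import dataclass

@dataclass
class FileChange:
    # Illustrative stand-in for file_diff.FileChange (hypothetical fields).
    path: str
    new_content: str

def collect_changes(step_outputs):
    # Sketch of step 3: flatten the per-step {path: content} outputs
    # into one batch that Agent.run() could emit at once.
    return [
        FileChange(path, content)
        for step in step_outputs
        for path, content in step.items()
    ]
```

Batching the diffs this way lets the editor apply or reject a whole step's worth of edits atomically instead of file by file.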

VSCE issue

I am trying to build the VS Code extension with the following guide but got stuck on this command in Powershell 5:

PS C:\Users\REDACTED\source\repos\rift> vsce package .\editors\rift-vscode\
 ERROR  Extension manifest not found: C:\Users\REDACTED\source\repos\rift\package.json

PS C:\Users\REDACTED\source\repos\rift> cd .\editors\rift-vscode\
PS C:\Users\REDACTED\source\repos\rift> vsce package .
Executing prepublish script 'npm run vscode:prepublish'...

> rift-vscode@0.0.8 vscode:prepublish
> npm run compile

> rift-vscode@0.0.8 compile
> node ./build.mjs

  out\main.js      828.0kb
  out\main.js.map    1.5mb

⚡ Done in 62ms
 ERROR  Invalid version .

Searching online I found out that VS Code extensions need "engines.vscode" specified in the package.json:

{
  ...
+  "engines": {
+    "vscode": "^1.7.5"
+  }
 ...
}

I re-ran the command above but still no cigar.
