morph-labs / rift
Rift: an AI-native language server for your personal AI software engineer
Home Page: https://morph.so
License: Apache License 2.0
Hi there! I'm having trouble running the model. I asked for a code completion and this error occurred; does anyone have any insight into how I should go about fixing it?
Running this on a MacBook Pro M2, 32 GB RAM, 1 TB storage.
Thank you all!
When a cancellation event occurs, the underlying streaming process associated with TextStream and GPT4ALL Model continues to run and consume CPU resources until the model has finished. This behavior is undesirable, as it leads to unnecessary resource utilization, and can negatively impact system performance. The streaming process should halt its operation immediately upon receiving a cancellation event to free up system resources.
[written by AI of course]
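A minimal sketch of what cancellation-aware streaming could look like, assuming an asyncio setup; the names (stream_tokens, the cancel event) are hypothetical, and the real fix would also need the underlying gpt4all binding to expose an equivalent stop hook:

```python
import asyncio

async def stream_tokens(tokens, cancel: asyncio.Event):
    """Yield tokens until a cancellation event is set.

    `tokens` stands in for the model's token generator; `cancel` is the
    event set when the request is cancelled (both names hypothetical).
    """
    for tok in tokens:
        if cancel.is_set():
            break  # stop consuming the model immediately
        yield tok
        await asyncio.sleep(0)  # give the event loop a chance to run

async def demo():
    cancel = asyncio.Event()
    out = []
    async for tok in stream_tokens(["a", "b", "c", "d"], cancel):
        out.append(tok)
        if tok == "b":
            cancel.set()  # simulate the user cancelling mid-stream
    return out
```

The key design point is that the check happens per token, so the stream stops within one iteration of the cancellation rather than running to completion.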
Currently, both run_chat and run_helper error out if the current code window does not fit into context. There should be a generic truncation function/object which accepts the VSCode document + cursor position metadata processed by the client in Rift and narrows the context to a window of N tokens around the center of the current window.
This will solve #17 and related issues reported in #rift-support.
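The truncation function described above could be sketched roughly like this; it is a character-based stand-in (a real implementation would count model tokens, e.g. with tiktoken), and truncate_around_cursor is a hypothetical name:

```python
def truncate_around_cursor(text: str, cursor: int, max_chars: int) -> str:
    """Return a window of at most `max_chars` characters centered on `cursor`.

    Character counts stand in for the N-token budget the issue asks for.
    """
    if len(text) <= max_chars:
        return text
    half = max_chars // 2
    start = max(0, cursor - half)
    end = start + max_chars
    if end > len(text):  # window ran off the end; slide it back
        end = len(text)
        start = end - max_chars
    return text[start:end]
```

Clamping at both ends keeps the window full-sized even when the cursor sits near the start or end of the document.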
makes Rift available for download on Code OSS
It seems like the server uses a custom morph/set_model_config request for sending configuration from the editor. Maybe I'm missing something after skimming through the code, but I wonder why it uses a custom request and not the standard workspace/didChangeConfiguration and workspace/configuration functionality? Using standard LSP functionality would work in all LSP-conforming client implementations, while custom requests need extra code.
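The standard route could be sketched as a handler that reads the same fields out of a workspace/didChangeConfiguration notification; the settings shape under a "rift" key is an assumption for illustration, not Rift's actual schema:

```python
def on_did_change_configuration(params: dict) -> dict:
    """Extract model settings from a standard LSP
    workspace/didChangeConfiguration notification payload.

    Assumes settings of the form {"rift": {"chatModel": ..., ...}};
    the defaults mirror the values seen in Rift's logs.
    """
    settings = params.get("settings", {}).get("rift", {})
    return {
        "chatModel": settings.get("chatModel", "openai:gpt-3.5-turbo"),
        "completionsModel": settings.get("completionsModel", "openai:gpt-3.5-turbo"),
        "openaiKey": settings.get("openaiKey"),
    }
```

With this in place a client would never need to know about morph/set_model_config; any LSP-conforming editor's settings plumbing would work unmodified.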
I am running on Windows 11 but do all my coding in the WSL and VSCode makes it a flawless experience, completely transparent.
However, when I open the Rift sidebar, I immediately get an error that the extension could not find the rift executable.
When I then open the terminal and run rift manually from there (which means the executable is clearly on the PATH, but inside WSL, not in the Windows host system), it suddenly works and all the Rift gizmos come to life.
Not a huge deal since it does work after running it manually, just a small annoyance that seems unnecessary.
It would help some editors support this engine through LSP if the engine were published on PyPI.
Errors I noticed:
reinstall.sh: line 18: vsce: command not found
reinstall.sh: line 22: code: command not found
Steps taken to fix:
brew install vsce
Open VS Code, shift+cmd+p, "Shell command: install 'code' command in PATH"
re-run bash reinstall.sh
The idea is that you should be able to connect to a Rift server that is running on a different (trusted) computer. This should all work, just need to make the connection string configurable.
It would be cool to be able to pass the debugger state and other related debugging information to an AI language model in Rift. This would enable us to build debugging assistants on Rift.
P.S. Please let me know if there is any other information or specifics that are needed for an issue -- did not notice any issue templates for rift.
I tried installing via the extensions menu, and then again from source.
With either, I end up at the same error message:
unexpected error: Command failed: /Users/scott/.morph/env/bin/rift
Traceback (most recent call last):
  File "/Users/scott/.morph/env/bin/rift", line 5, in <module>
    from rift.server.core import main
  File "/Users/scott/.morph/rift/rift-engine/rift/server/core.py", line 12, in <module>
    from rift.llm.gpt4all_model import Gpt4AllModel, Gpt4AllSettings
  File "/Users/scott/.morph/rift/rift-engine/rift/llm/gpt4all_model.py", line 9, in <module>
    from gpt4all import GPT4All
  File "/Users/scott/.morph/env/lib/python3.10/site-packages/gpt4all/__init__.py", line 1, in <module>
    from .pyllmodel import LLModel  # noqa
  File "/Users/scott/.morph/env/lib/python3.10/site-packages/gpt4all/pyllmodel.py", line 50, in <module>
    llmodel = load_llmodel_library()
  File "/Users/scott/.morph/env/lib/python3.10/site-packages/gpt4all/pyllmodel.py", line 46, in load_llmodel_library
    llmodel_lib = ctypes.CDLL(llmodel_dir)
  File "/usr/local/Cellar/[email protected]/3.10.12_1/Frameworks/Python.fr...
(Apologies for the terrible formatting and the truncation; I'm copying and pasting from the VSCode pop-up.)
If a user connects to port 7797 over HTTP, the server should return a small HTML payload with a link to a getting-started guide.
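One way to do this without a separate HTTP server is to peek at the first bytes of a new connection and answer with canned HTML when they look like an HTTP request; this is a sketch under that assumption (the function name and the guide URL placement are hypothetical), with the real hookup happening in the asyncio connection handler before LSP framing starts:

```python
GETTING_STARTED_HTML = (
    b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nConnection: close\r\n\r\n"
    b"<html><body><p>This is the Rift LSP port. See the "
    b'<a href="https://github.com/morph-labs/rift">getting started guide</a>.'
    b"</p></body></html>"
)

def http_response_or_none(first_bytes: bytes):
    """Return a canned HTML response if the connection opens like an
    HTTP request; otherwise return None so normal LSP handling proceeds."""
    if first_bytes.startswith((b"GET ", b"HEAD ", b"POST ")):
        return GETTING_STARTED_HTML
    return None
```

LSP clients open with a `Content-Length:` header, so the two protocols are easy to tell apart from the first few bytes.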
We should decide on a standard Python linter / formatter to enforce code style and best practices.
Rather than running these in a pre-commit hook I think we should minimize friction for contributors by having these run upon creation of a PR + add contribution instructions to run the linter and formatter locally before creating a PR.
Issue: I am unable to run rift.server.core; see the error below.
Windows 10 OS.
Python 3 installed.
I ran the following in a Powershell 5 terminal:
PS C:\Users\REDACTED\source\repos\rift> python -m venv rift_env
PS C:\Users\REDACTED\source\repos\rift> .\rift_env\Scripts\Activate.ps1
(rift_env) PS C:\Users\REDACTED\source\repos\rift> pip install -e .\rift-engine
(rift_env) PS C:\Users\REDACTED\source\repos\rift> python --version
Python 3.11.3
(rift_env) PS C:\Users\REDACTED\source\repos\rift> python -m rift.server.core --port 7797
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\server\core.py", line 5, in <module>
    from rift.server.lsp import LspServer
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\server\lsp.py", line 9, in <module>
    from rift.llm.abstract import (
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\llm\__init__.py", line 1, in <module>
    from .openai_client import OpenAIClient
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\llm\openai_client.py", line 20, in <module>
    from pydantic import BaseModel, BaseSettings, SecretStr
  File "C:\Users\REDACTED\source\repos\rift\rift_env\Lib\site-packages\pydantic\__init__.py", line 206, in __getattr__
    return _getattr_migration(attr_name)
  File "C:\Users\REDACTED\source\repos\rift\rift_env\Lib\site-packages\pydantic\_migration.py", line 279, in wrapper
    raise PydanticImportError(
pydantic.errors.PydanticImportError: `BaseSettings` has been moved to the `pydantic-settings` package. See https://docs.pydantic.dev/2.0/migration/#basesettings-has-moved-to-pydantic-settings for more details.
For further information visit https://errors.pydantic.dev/2.0/u/import-error
Expected: it works.
Actual: error.
Rift chat context can easily max out when the files are long --- we should ensure that the method that constructs the system message gracefully handles length errors like this. (Discord message).
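A graceful-degradation sketch for this: trim the oldest non-system turns until the conversation fits the budget. The token counter here is a character-length stand-in (a real version would count model tokens), and fit_messages is a hypothetical name:

```python
def fit_messages(messages, budget: int, count=len):
    """Drop the oldest non-system messages until the total size fits.

    `count` maps a message's content to a size; it defaults to character
    length as a stand-in for a proper token counter. Assumes the usual
    chat format of dicts with "role" and "content" keys.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    size = lambda ms: sum(count(m["content"]) for m in ms)
    while rest and size(system + rest) > budget:
        rest.pop(0)  # oldest turn goes first
    return system + rest
```

Keeping the system message and shedding history is only one policy; for Rift's case of long files, truncating the file excerpt inside the system message would be the complementary half.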
Hi,
I'm trying to use Rift with a gpt4all model for questions on C#.
Example: I have this code
string[] filePaths = Directory.GetFiles(@"c:\MyDir\");
and I ask Rift to convert it to .NET Core.
I'm not getting any meaningful results.
Do you have any hints on how to tune it for C# and .NET Core?
thank you
Hi,
I installed the Rift engine on my local machine.
Running the extension, I got the following error on the server:
<LspServer 1> transport closed gracefully: end of stream jsonrpc.py:541
INFO exiting serve_forever loop jsonrpc.py:569
INFO <LspServer 1> entered shutdown state jsonrpc.py:584
INFO initializing LSP server <LspServer 2> server.py:47
INFO client initialized. server.py:55
[18:46:12] INFO <LspServer 2> recieved model config chatModel='openai:gpt-3.5-turbo' completionsModel='openai:gpt-3.5-turbo' openaiKey=None lsp.py:249
ERROR <LspServer 2> request morph/run_chat:1 unhandled ValidationError: jsonrpc.py:648
1 validation error for OpenAIClient
Full error: https://pastebin.com/HmzBTuMW
Why is it looking for an OpenAI key? I thought this was a local LLM that handles Copilot-style tasks without needing OpenAI.
Hi
I started working at implementing a Sublime Text plugin.
I did some research and I have unfortunately realised it's a bigger job than I have capacity for at this time.
However I did spend some time on it so I'll share my notes here. I am hoping they will be helpful.
A Sublime plugin for Rift should leverage the existing LSP plugin architecture provided by the standard LSP plugin. I managed to have this plugin start a local instance of the Rift server with the inlined LSP.sublime-settings file below.
Assuming the Rift server binary can be published on PyPI (there is an open issue, so it will happen), PipClientHandler, a class from lsp-utils, can be used to install and manage the server. This is documented here, and an example of its use can be found here.
The current local server installation (i.e. without PyPI) can be supported, but it would require using the class GenericClientHandler instead, and creating a new ServerResourceHandler class deriving from ServerResourceInterface to manage the server lifecycle (all these classes are part of the lsp-utils repo above).
Once a reference to a running server is established with one of the two methods above, it should be possible to leverage the LSP Session object to send requests to the server.
A similar plugin for Copilot can be found here; it has a lot of the code needed.
Hope it helps.
Thanks
Config file LSP.sublime-settings
{
"log_debug": true,
"clients": {
"rift": {
"enabled": true,
"command": [
"python3",
"/Users/jeremylan/Development/git/rift/rift-engine/rift/server/core.py",
"--port",
"stdio"
],
"selector": "source.python"
}
}
}
Attempting to start the server from the Sublime Text LSP plugin results in the error No implementation of ofdict for Literal.
https://discord.com/channels/1117623339456933940/1122338689067012216/1122338710143381605
For VSCode this would e.g. return a tree of URIs (maybe filtered by whether or not they're tracked by Git) visible from the editor's current workspace as well as open editor windows (for untracked files).
That is, equip the server with an
@rpc_request
def request_workspace_info(self, ...) -> ...:
    ...
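The body of such a handler could look roughly like this; it returns a flat list of file:// URIs rather than a tree, skips .git, and leaves out the "tracked by Git" filtering mentioned above (function name and cap are illustrative):

```python
import os

def workspace_file_uris(root: str, max_files: int = 1000) -> list[str]:
    """Walk the workspace and return file:// URIs, skipping .git.

    Sketch of what a workspace-info request could return; a real
    version might filter by git-tracked status and return a tree.
    """
    uris = []
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != ".git"]  # prune VCS metadata
        for name in sorted(filenames):
            uris.append("file://" + os.path.join(dirpath, name))
            if len(uris) >= max_files:
                return uris
    return uris
```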
LocalAI replicates the OpenAI API, so we should support setting arbitrary OpenAI base URLs in the model configuration settings.
python -V
Python 3.10.7
Python 3.10 or above is not found on your system. Please install it and try again. Ensure that python3.10 is available and try installing Rift manually: https://www.github.com/morph-labs/rift
Possibly the client is using a different encoding?
See e.g. the setup here https://github.com/ocamllabs/vscode-ocaml-platform/blob/master/src/ocaml_lsp.ml#L166
While the target is clearly getting rift running locally, it would be useful to be able to connect to an Azure OpenAI instance.
Currently, using the VS Code plugin, the only way I can monitor what is going on with the server after a prompt is submitted is to check the server terminal window, i.e.
INFO Created chat stream, awaiting results.
This is not ideal for a VS Code user; a visual clue (such as a spinner or the Morph icon changing colour) in the chat window would be useful.
AST support is coming: #75
When that is done, an agent can take a region, or an entire file, and suggest edits to add missing types.
This is one example of combining computation (to find what types are missing) and prediction (from llm, to suggest the type).
Hi,
I've been using Aider a lot just via my terminal/cli. I just installed Rift to check it out but it seems that it's using an old version of Aider (v0.91)? Is there any way to update the version of Aider? And for that matter is it possible to update all of the programs it uses (GPT Engineer, Smol Dev, etc.)? I like the idea of Rift but if it can't access the latest version of the programs then I'd rather just use Aider/GPT Engineer natively.
Cheers,
The base LSP server's handler for initialize is currently a no-op:
@rpc_method("initialize")
async def on_initialize(self, params: InitializeParams) -> InitializeResult:
    # [todo] inject lsp capabilities here.
    logger.info(f"initializing LSP server {self.name}")
    return InitializeResult(
        serverInfo=PeerInfo(name=self.name, version=None),
        capabilities=self.capabilities,
    )
It should, if workspace folders / a root dir are specified in the params, load files (subject to reasonable constraints, e.g. excluding large binary blobs, or only including those tracked by git) into self.documents so that the server starts with an in-memory view of the workspace, which is then updated by didChange and didOpen events from the client.
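The loading step could be sketched as follows; the size cap and the "valid UTF-8" check are cheap stand-ins for a proper binary filter, and the function name is hypothetical:

```python
import os

def load_workspace_documents(root: str, max_bytes: int = 200_000) -> dict:
    """Read workspace files into a {uri: text} map suitable for
    self.documents. Skips dot-directories, oversized files, and
    anything that is not valid UTF-8 (a cheap binary filter); a real
    version might also honour .gitignore.
    """
    docs = {}
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if not d.startswith(".")]
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > max_bytes:
                continue  # exclude large blobs
            try:
                with open(path, encoding="utf-8") as f:
                    docs["file://" + path] = f.read()
            except (UnicodeDecodeError, OSError):
                continue  # binary or unreadable; skip
    return docs
```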
Currently each instance of LspServer is unaware of other RPC servers connected to the process, so you can't have more advanced behaviour where a different connection on a different process issues workspaceEdit commands, etc. So this issue outlines a plan to make the rift-engine support this.
When you start an instance of rift-engine, the entry point is a class called CodeCapabilitiesServer, which is responsible for taking incoming TCP connections and creating an instance of LspServer, which then handles RPC on that connection.
class CodeCapabilitiesServer:
    servers: dict[str, LspServer]

    def run_server(self):
        server = LspServer(self)
        self.servers[server.id] = server
        try:
            ...
        finally:
            # dead connections should be removed.
            del self.servers[server.id]
Then inside each LspServer, we can access all of the other connections with self.parent.servers.items() and issue commands on other connections.
Error comes after completing pip install -e ./rift-engine
Rift is already installed:
Name: rift
Version: 0.0.3
Summary:
Home-page:
Author:
Author-email: Morph Labs <[email protected]>
License:
Location: /usr/local/lib/python3.11/site-packages
Editable project location: /Users/agrim/Downloads/rift/rift-engine
Requires: aiohttp, fire, gpt4all, miniscutil, pydantic, rich, sentencepiece, tiktoken, torch, transformers
edit: nvm, my mistake. Make sure your pip/Python versions are correct and run it in a venv.
Hello, first of all, this is really great. Congratulations!
I was excited and wanted to try it on my machine; however, I got an error when running
python -m rift.server.core --port 7797
The error message:
$ python -m rift.server.core --port 7797
Traceback (most recent call last):
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "E:\rift\rift-engine\rift\server\core.py", line 5, in <module>
    from rift.server.lsp import LspServer
  File "E:\rift\rift-engine\rift\server\lsp.py", line 5, in <module>
    from miniscutil.lsp import LspServer as BaseLspServer, rpc_method
  File "C:\Users\prifalab\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\miniscutil\__init__.py", line 7, in <module>
    from .misc import (
  File "C:\Users\prifalab\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\miniscutil\misc.py", line 25, in <module>
    from typing_extensions import deprecated
ImportError: cannot import name 'deprecated' from 'typing_extensions' (C:\Users\prifalab\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\typing_extensions.py)
Did I miss something?
thank you
Hey all,
I've been following the work you are doing at Rift and it seems very interesting to me. I've been working on building coding agents at this repository here, where I am specifically interested in a bottom-up approach to code automation.
I would be happy to help with the integration and testing. This would involve:
Thanks!
The error mentions an API key - does this project depend on the OpenAI API? I thought the whole point was to have an offline AI model so as not to be spied upon.
implement fizz-buzz in Python 3.
Running the Rift source code with python -m rift.server.core --port 7797:
[12:16:58] INFO starting rift server on 7797 core.py:150
INFO listening with LSP protool on ('127.0.0.1', 7797) core.py:95
[12:27:43] INFO <LspServer 1> transport closed gracefully: end of stream jsonrpc.py:541
INFO exiting serve_forever loop jsonrpc.py:569
INFO <LspServer 1> entered shutdown state jsonrpc.py:584
INFO initializing LSP server <LspServer 2> server.py:47
INFO client initialized. server.py:55
[12:27:53] INFO <LspServer 2> recieved model config chatModel='openai:gpt-3.5-turbo' completionsModel='openai:gpt-3.5-turbo' lsp.py:249
openaiKey=None
ERROR <LspServer 2> request morph/run_chat:1 unhandled ValidationError: jsonrpc.py:648
1 validation error for OpenAIClient
api_key
field required (type=value_error.missing)
This is likely caused by a bug in the morph/run_chat method handler.
Traceback (most recent call last):
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\rpc\jsonrpc.py", line 633, in _on_request
    result = await self._on_request_core(req)
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\rpc\jsonrpc.py", line 715, in _on_request_core
    result = await self.dispatcher.dispatch(req.method, params)
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\rpc\jsonrpc.py", line 246, in dispatch
    result = await result
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\server\lsp.py", line 339, in on_run_chat
    chat = await self.ensure_chat_model()
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\server\lsp.py", line 315, in ensure_chat_model
    await self.get_config()
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\server\lsp.py", line 252, in get_config
    self.completions_model = config.create_completions()
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\llm\create.py", line 29, in create_completions
    return create_client(self.completionsModel, self.openaiKey)
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\llm\create.py", line 56, in create_client
    client = create_client_core(config, openai_api_key)
  File "C:\Users\REDACTED\source\repos\rift\rift-engine\rift\llm\create.py", line 88, in create_client_core
    return OpenAIClient.parse_obj(kwargs)
  File "pydantic\main.py", line 526, in pydantic.main.BaseModel.parse_obj
  File "pydantic\env_settings.py", line 40, in pydantic.env_settings.BaseSettings.__init__
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for OpenAIClient
api_key
  field required (type=value_error.missing)
issue with miniscutil
Would love to try this project once it supports JetBrains IDEs (such as IntelliJ, PyCharm, WebStorm, etc.).
Also update README with instructions for users to pip install pyrift
To enhance accessibility and user-friendliness, a specific and straightforward set of installation instructions for both Python and Rift could be added to the project documentation. This would ease the onboarding process for newcomers unfamiliar with Python who are interested in using Rift with other languages.
Would love to join the community to discuss roadmap/contributions, the link is broken
I see you wrote OpenAI API in the features. Can we use it in VS Code?
My laptop still struggles to run large language models locally; it would be a lot easier to use the API.
We should add support for gpt-engineer in rift.agents. This would involve:
- rift-engine/rift/agents/gpt_engineer.py
- a GPTEngineer instance of Agent
- wrapping the Step objects in the gpt-engineer library so that we can emit batches of file_diff.FileChanges in the implementation of Agent.run()
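The wrapping described above could be sketched as follows; the FileChange and GPTEngineerAgent classes here are stand-ins for rift's file_diff.FileChange and Agent APIs, whose real shapes this sketch does not assume to know, and the steps are modelled as callables returning a {path: content} mapping:

```python
from dataclasses import dataclass, field

@dataclass
class FileChange:
    """Stand-in for rift's file_diff.FileChange (hypothetical shape)."""
    uri: str
    new_content: str

@dataclass
class GPTEngineerAgent:
    """Sketch of a rift.agents wrapper: run gpt-engineer-style steps
    and collect each step's file edits into a batch of FileChanges."""
    steps: list = field(default_factory=list)

    def run(self):
        batches = []
        for step in self.steps:
            # each step yields {path: content}, mirroring a workspace diff
            files = step()
            batches.append(
                [FileChange(uri=p, new_content=c) for p, c in files.items()]
            )
        return batches
```

Emitting one batch per step is the point: the editor can then apply or preview each step's changes as a unit instead of receiving one opaque final diff.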
E.g., if the main editor window is focused on the Settings page, the Rift server will initialize a client but fail silently.
I am trying to build the VS Code extension with the following guide but got stuck on this command in Powershell 5:
PS C:\Users\REDACTED\source\repos\rift> vsce package .\editors\rift-vscode\
ERROR Extension manifest not found: C:\Users\REDACTED\source\repos\rift\package.json
PS C:\Users\REDACTED\source\repos\rift> cd .\editors\rift-vscode\
PS C:\Users\REDACTED\source\repos\rift> vsce package .
Executing prepublish script 'npm run vscode:prepublish'...
> rift-vscode@0.0.8 vscode:prepublish
> npm run compile
> rift-vscode@0.0.8 compile
> node ./build.mjs
out\main.js 828.0kb
out\main.js.map 1.5mb
⚡ Done in 62ms
ERROR Invalid version .
Searching online, I found out that VS Code extensions need "engines.vscode" specified in the package.json:
{
...
+ "engines": {
+ "vscode": "^1.7.5"
+ }
...
}
I re-ran the command above but still no cigar.