tower-lsp's Introduction

tower-lsp

Language Server Protocol implementation for Rust based on Tower.

Tower is a simple and composable framework for implementing asynchronous services in Rust. Central to Tower is the Service trait, which provides the necessary abstractions for defining request/response clients and servers. Examples of protocols implemented using the Service trait include hyper for HTTP and tonic for gRPC.

This library (tower-lsp) provides a simple implementation of the Language Server Protocol (LSP) that makes it easy to write your own language server. It consists of three parts:

  • The LanguageServer trait which defines the behavior of your language server.
  • The asynchronous LspService delegate which wraps your language server implementation and defines the behavior of the protocol.
  • A Server which spawns the LspService and processes requests and responses over stdio or TCP.

Example

use tower_lsp::jsonrpc::Result;
use tower_lsp::lsp_types::*;
use tower_lsp::{Client, LanguageServer, LspService, Server};

#[derive(Debug)]
struct Backend {
    client: Client,
}

#[tower_lsp::async_trait]
impl LanguageServer for Backend {
    async fn initialize(&self, _: InitializeParams) -> Result<InitializeResult> {
        Ok(InitializeResult::default())
    }

    async fn initialized(&self, _: InitializedParams) {
        self.client
            .log_message(MessageType::INFO, "server initialized!")
            .await;
    }

    async fn shutdown(&self) -> Result<()> {
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    let stdin = tokio::io::stdin();
    let stdout = tokio::io::stdout();

    let (service, socket) = LspService::new(|client| Backend { client });
    Server::new(stdin, stdout, socket).serve(service).await;
}

Using runtimes other than tokio

By default, tower-lsp is configured for use with tokio.

Using tower-lsp with other runtimes requires disabling default-features and enabling the runtime-agnostic feature:

[dependencies.tower-lsp]
version = "*"
default-features = false
features = ["runtime-agnostic"]

Using proposed features

You can enable proposed features from LSP Specification version 3.18 by enabling the proposed Cargo crate feature. Note that the proposed features carry no semver guarantees, so breaking changes may occur between any two versions.
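For example, in Cargo.toml:

[dependencies.tower-lsp]
version = "*"
features = ["proposed"]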

Ecosystem

  • tower-lsp-boilerplate - Useful GitHub project template which makes writing new language servers easier.

License

tower-lsp is free and open source software distributed under the terms of either the MIT or the Apache 2.0 license, at your option.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

tower-lsp's Issues

Question regarding server distribution

Hey, I'm considering rewriting a language server in Rust and plan to use tower-lsp for this.

My biggest question right now is how to distribute the server with the client, for example as VS Code extension. Do I need to compile to WASM to make it platform agnostic? Or do I have to compile several versions and choose the server binary dynamically?

So far I found #317 which provides a sample implementation, but I'm still a bit confused on how the distribution works.

Many thanks in advance!

Consider supporting proposed semanticHighlight notification

The textDocument/semanticHighlight server-to-client notification is a proposed addition to the Language Server Protocol which would allow servers to perform syntax highlighting of open text documents (upstream issue, prototype JS implementation, relevant lsp-types PR).

Since this is a proposed protocol feature currently not in the specification yet, we might need to gate it with a "proposed" feature flag, similar to what lsp-types does for the feature currently.

This feature is fairly low priority, possibly below #145, but would nonetheless be nice to have.

Reduce number of required methods in LanguageServer trait

I think that as the number of implemented methods grows, users of the library will face a lot of boilerplate just to implement a simple server.

The simplest solution would probably be to require only the methods that are declared (as a capability) when ServerCapabilities::default() is used.

Another approach could be to split up LanguageServer in the categories outlined in the specification like WindowHandler, TelemetryHandler, WorkspaceHandler, etc. leaving only initialize and initialized in LanguageServer. These (the new traits) shouldn't require implementing all methods either.
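For illustration, the first idea could look something like the hypothetical sketch below (not the current API): each optional handler gets a provided default body, so only methods backed by a declared capability need explicit implementations. This assumes tower_lsp::jsonrpc::Error::method_not_found as the error constructor.

use tower_lsp::jsonrpc::{Error, Result};
use tower_lsp::lsp_types::{Hover, HoverParams};

#[tower_lsp::async_trait]
pub trait OptionalHandlers: Send + Sync + 'static {
    // Provided default: servers that never declared hover support simply
    // inherit this body instead of writing boilerplate.
    async fn hover(&self, _: HoverParams) -> Result<Option<Hover>> {
        Err(Error::method_not_found())
    }
}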

I think this is a great project and of all the other options I've considered for a language server it has the simplest and most ergonomic API.

Fix server support for $/cancelRequest

It appears that the RequestKind::CancelRequest struct variant is defined incorrectly, meaning client-to-server $/cancelRequest notifications are silently ignored. This seems to have been broken for a long time, apparently since #202.

I discovered this by intentionally adding a 1500ms async delay to the LanguageServer::did_change method in examples/stdio.rs. When this method is invoked on the server, the unexpected delay causes the client to emit a $/cancelRequest notification to the server. After opening a new text buffer and starting to type, I successfully confirmed the $/cancelRequest notification was indeed being emitted by the client, but the server wasn't showing any successful request cancellation logs.

@silvanshade Friendly heads-up: this issue was apparently present in tower-lsp for a while and it seems to affect lspower too. I've added a unit test that ensures that this doesn't break again in the future.

Server doesn't exit on receiving exit notification

On receiving an exit notification, the server does not return until the next well-formed message is received, or the input is closed.

This might not affect much for regular usage on stdin, but likely will for a TCP listener. It also causes the tests I am writing using lsp-test to hang.

Sample session:
stdout>  INFO language server initialized
stdout> Content-Length: 86
stdout> 
stdout> {"jsonrpc":"2.0","result":{"capabilities":{"documentFormattingProvider":true}},"id":1}

 stdin> Content-Length: 46^M
 stdin> ^M
 stdin> {
 stdin>     "jsonrpc": "2.0",
 stdin>     "method": "exit"
 stdin> }

stdout>  INFO exit notification received, stopping

 stdin> Content-Length: 0^M
 stdin> ^M

stdout> ERROR failed to decode message: unable to parse JSON body: EOF while parsing a value at line 1 column 0
stdout> Content-Length: 75
stdout> 
stdout> {"jsonrpc":"2.0","error":{"code":-32700,"message":"Parse error"},"id":null}

 stdin> Content-Length: 2^M
 stdin> ^M
 stdin> {}

stdout> ERROR language server has exited
stdout>  INFO Successfully completed

Implement support for $/cancelRequest (for client requests)

Support for canceling server requests (from the client to the server) was implemented when #145 was resolved.

However, the LSP spec for $/cancelRequest allows requests from the server to the client to also be canceled.

This would especially be useful with send_custom_request per #230.

I'm not sure exactly how this would be implemented but I suppose the ClientRequests struct would need to be modified to work with future::AbortHandle, similar to ServerRequests, and a corresponding cancel method added to the impl ClientRequests.

As far as actually canceling requests goes, it would probably be good to have an interface similar to how microsoft/vscode-languageserver-node does it (see here), where you can create a "cancellation token" prior to sending the request, and then that token would become an argument to send_request (and its derivatives).

The actual cancellation call would then be token.cancel(), which would cause the response to resolve as a RequestCancelled error (after the client properly responds to $/cancelRequest).
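To make the shape concrete, a purely hypothetical API sketch (neither CancellationToken nor send_request_with_token exists in tower-lsp today):

// Create a token before sending, and pass it alongside the request:
let token = CancellationToken::new();
let pending = client.send_request_with_token::<ExecuteCommand>(params, token.clone());

// Later, from anywhere holding the token:
token.cancel(); // emits $/cancelRequest; `pending` resolves to Err(RequestCancelled)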

Does that seem feasible?

Add some info for testing(in general) the extensions built on tower-lsp

Hi there, at Hyperledger we are developing a VS Code language server for Solidity, and tower-lsp has proved to be a great server implementation to build our server on. Thanks for writing this server and for keeping it up to date.
So I was working on my version of the server and finished implementing the hover feature. Now, my server is in Rust (tower-lsp) and the client is in JS. I searched the VS Code API but couldn't find any way to invoke hover while running tests (i.e. running a test from the client side and getting the hover message back from the server via some function).
I implemented diagnostics as well, but for those the VS Code API had a getDiagnostics function which did the trick. This time I can't find any such method for hover.
Any suggestions on how the hover tests should be implemented?
my repo: https://github.com/hyperledger-labs/solang-vscode

Implement Service for Client

It seems like a design oversight that the Client type does not implement the tower_service::Service trait. Implementing this trait would enable compatibility with the tower middleware, and hopefully simplify the request dispatching logic.

Unlike LspService as described in issue #177, Client should be perfectly capable of having a unary request/response model. In this case, the Request type parameter would be ClientRequest and the associated types would be as follows:

type Response = Option<Response>; // Response is `None` if the request was a notification.
type Error = crate::jsonrpc::Error;
type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;

It would be great if it were possible to implement Service generically, specializing over R: lsp_types::Request and N: lsp_types::Notification and accepting Params from both as input, but this is impossible to express in the Rust type system, as far as I can tell. I think the above approach is a good enough solution for our use case.

We should think about whether ClientRequest should be redesigned to not include the id field for requests, in favor of having the Client manage it internally instead, using the request_id field. This prevents callers from submitting the same request ID twice, preventing accidental collisions and improving type safety.

Consider ditching concurrent handler execution

Background

In order for the framework to support both standard streams and TCP transports generically, we need a way to interleave both client-to-server and server-to-client communication over a single stream. This topic has been the subject of a previous issue. The current strategy used by tower-lsp (and lspower in turn) to accomplish this is somewhat simplistic:

  1. Incoming messages (requests to server, responses from client) are read sequentially from the input stream
  2. Each message is routed to its respective async handler
  3. Pending tasks are buffered and executed concurrently on a single thread, maximum four (4) tasks at a time, preserving order
  4. Outgoing messages (responses from server, requests from client) are serialized into the output stream

The third step above is particularly significant, however, and has some unintended consequences.

Problem Summary

A closer reading of the "Request, Notification and Response Ordering" section of the Language Server Protocol specification reveals:

Responses to requests should be sent in roughly the same order as the requests appear on the server or client side. [...] However, the server may decide to use a parallel execution strategy and may wish to return responses in a different order than the requests were received. The server may do so as long as this reordering doesn’t affect the correctness of the responses.

This is concerning to me because tower-lsp unconditionally executes pending async tasks concurrently without any regard for the correctness of the execution. The correct ordering of the outgoing messages is preserved, as per the spec, but the execution order of the handlers is not guaranteed to be correct. For example, an innocent call to self.client.log_message().await inside one server handler might potentially prompt the executor to not immediately return back to that handler's yield point, but instead start processing the next incoming request concurrently as it becomes available.

As evidenced by downstream GitHub issues like denoland/deno#10437, such behavior can potentially cause the state of the server and client to slowly drift apart as many small and subtle errors accumulate and compound over time. This problem is exacerbated by LSP's frequent use of JSON-RPC notifications rather than requests, which don't require waiting for a response from the server (see relevant discussion in microsoft/language-server-protocol#584 and microsoft/language-server-protocol#666).

It's not really possible to confidently determine whether any particular "state drift" bug was caused by the server implementation of tower-lsp without stepping through with a debugger. However, there are some things we can do to improve the situation for our users and make such bugs less likely.

Possible solutions

For example, we could process client-to-server and server-to-client messages concurrently to each other, but individual messages of each type must execute in the order they are received, one by one, and no concurrency between individual LanguageServer handlers would be allowed.

This would greatly decrease request throughput, however. Perhaps it would be beneficial to potentially allow some handlers to run concurrently where it's safe (with user opt-in or opt-out), but otherwise default to fully sequential execution. To quote the "Request, Notification and Response Ordering" section of the Language Server Protocol specification again:

For example, reordering the result of textDocument/completion and textDocument/signatureHelp is allowed, as each of these requests usually won't affect the output of the other. On the other hand, the server most likely should not reorder textDocument/definition and textDocument/rename requests, since executing the latter may affect the result of the former.

If we choose to go this route, we may have to determine ahead of time which handlers are usually safe to reorder and interleave and which are not. I imagine this would introduce a ton of additional complexity to tower-lsp, though, so I'd rather leave such responsibilities to the server author to implement themselves, if possible.

Below are a few key takeaways that we can start adding now to improve the situation:

  1. Execute server request/notification handlers sequentially in the order in which they were received, no concurrency.
  2. Process client-to-server and server-to-client messages concurrently with each other.

The matters of optional user-controlled concurrency are still unanswered, however, and will likely require further design work.
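A rough sketch of what (1) and (2) might look like together, with hypothetical stream and handler names:

use futures::StreamExt;

// Each direction is drained strictly in order (no handler concurrency),
// but the two directions still make progress concurrently via join!.
async fn drive(incoming: MessageStream, outgoing: MessageStream) {
    let server_side = incoming.for_each(|msg| handle_incoming(msg));
    let client_side = outgoing.for_each(|msg| handle_outgoing(msg));
    futures::join!(server_side, client_side);
}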

Any thoughts on the matter, @silvanshade?

Add code to handle incremental text updates

Nice project! I can't use it yet because I really only need go to type definition, but I'm keeping an eye on this.

Anyway, it would be very good if you could add code to the crate that (optionally) lets users have the incremental text updates handled for them. It's quite a pain to get right yourself because everything is addressed by line and column indices, and columns are measured in UTF-16 code units even though the actual text is sent as UTF-8! Yes, it is ridiculous.
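For illustration (this helper is not part of tower-lsp), the conversion from an LSP UTF-16 column to a UTF-8 byte offset within a single line might look like:

// Map an LSP UTF-16 column within `line` to a UTF-8 byte offset.
fn utf16_col_to_byte_offset(line: &str, utf16_col: usize) -> Option<usize> {
    let mut units = 0;
    for (byte_idx, ch) in line.char_indices() {
        if units >= utf16_col {
            return Some(byte_idx);
        }
        units += ch.len_utf16();
    }
    // The position may also point one past the end of the line.
    (units >= utf16_col).then(|| line.len())
}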

Add support for remaining textDocument requests on server

error[E0308]: mismatched types in lib.rs:108:1

I don't know anything about Rust, or this project. I am using cargo to install another binary which has tower-lsp v0.14.1 as a dependency and I'm seeing this error:

error[E0308]: mismatched types
   --> /home/bphunter/.cargo/registry/src/github.com-1ecc6299db9ec823/tower-lsp-0.14.1/src/lib.rs:108:1
    |
108 | #[rpc]
    | ^^^^^^ expected enum `std::option::Option`, found enum `Id`
    |
    = note: expected enum `std::option::Option<Id>`
               found enum `Id`
    = note: this error originates in the attribute macro `rpc` (in Nightly builds, run with -Z macro-backtrace for more info)
help: try wrapping the expression in `jsonrpc::_::_serde::__private::Some`
    |
108 | jsonrpc::_::_serde::__private::Some(#[rpc])
    | ++++++++++++++++++++++++++++++++++++      +

For more information about this error, try `rustc --explain E0308`.
error: could not compile `tower-lsp` due to previous error

I am using: cargo 1.58.0 (f01b232bc 2022-01-19)

Any help or explanation would be appreciated.

Refactor Printer

I didn't really know how to name this issue, but currently there are some things that could be improved in delegate/printer.rs.

Duplicated code for getting the next request id

First it would be nice to get the request id number in one function and not have it duplicated in register_capability, unregister_capability, apply_edit.

The simplest solution would probably be to just put it in a separate method (Printer::get_next_request_id()) in Printer, but I feel like it would be better to get the request id in the function/method that generates the actual request (currently make_request()).

Redundant serialization and deserialization in make_request and make_notification

Second, I would be curious whether we could skip the serialization, deserialization, and then re-serialization of params in make_request and make_notification.

Depending on the input params, I think that the problem could be solved with some string manipulation (serializing the rpc request, params and then inserting params into the serialized request), or by defining custom structs or one enum (RegisterCapabilityRpcRequest, ApplyWorkspaceEditRpcRequest) that take the right parameter (e.g. RegistrationParams, ApplyWorkspaceEditParams, UnregistrationParams, etc.) and serialize directly.

The second option seems quite doable since both make_request and make_notification return String, with the usual downside of extra maintenance (of the said structs) to keep up with lsp_types and jsonrpc-core. The first one comes with the risk of introducing bugs and is really hacky, but it could probably be hacked into the current implementation with minimal changes and maintenance afterwards.

Edit: Maybe implementing a trait RpcServerRequest : Serialize for the lsp types (RegisterCapability, UnregisterCapability, etc) that offers a uniform interface for serialization (hard code rpc version in a default implementation and take id as param in the actual serialization method) would be the cleanest solution.
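A hypothetical sketch of that trait (names invented here, not actual tower-lsp code):

use serde::Serialize;

trait RpcServerRequest: Serialize + Sized {
    const METHOD: &'static str;

    // The JSON-RPC version is hard-coded once; only `id` varies per request.
    fn to_rpc_string(&self, id: u64) -> String {
        format!(
            r#"{{"jsonrpc":"2.0","id":{},"method":"{}","params":{}}}"#,
            id,
            Self::METHOD,
            serde_json::to_string(self).expect("params should serialize"),
        )
    }
}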

Progress on #13 probably affects this issue, since it also involves changes to printer. Depending on the discussion here and the status of #13 I'd be happy to help with the refactoring.

tower-lsp in the web

Hi,

I am trying to set up a website which has both an editor (https://github.com/TypeFox/monaco-languageclient) and a language server (tower-lsp) running concurrently on the client side in the browser. If possible it would be nice for portability to run everything inside WebAssembly or JavaScript.

An example of how to implement the language server on the javascript side is: https://github.com/TypeFox/monaco-languageclient/blob/98f15c6499f23c853882c557dc18ccf7406ed1cf/example/src/json-server.ts#L18-L23

This looks very similar to how it's done in tower-lsp:

#[tokio::main]
async fn main() {
    env_logger::init();
    let stdin = tokio::io::stdin();
    let stdout = tokio::io::stdout();
    let (service, messages) = LspService::new(Backend::default());
    Server::new(stdin, stdout)
        .interleave(messages)
        .serve(service)
        .await;
}

From discussions on the Tokio Discord, Tokio cannot be compiled to WebAssembly since it has dependencies to unix. Do you see any other possibilities down the line to run tower-lsp in the browser?

The reason behind writing the server in Rust and compiling it to WASM, instead of writing a server in JavaScript that makes calls to handlers in Rust, is that it should be easier to keep state between compilations.

Remodel service as bidirectional stream

Problem statement

Currently, LspService adopts a unary request/response model where <LspService as tower::Service>::call() accepts a single Incoming message and sends an Option<String> response back.

The basic service definition looks like this:

impl tower::Service<Incoming> for LspService {
    type Response = Option<String>;
    type Error = ExitedError;
    type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;

    fn call(&mut self, message: Incoming) -> Self::Future { ... }
}

This model is a poor fit for a Language Server Protocol implementation for several reasons:

  1. LSP is bidirectional, where the client and server can both send requests and responses to each other. The unary request/response model imposes a strong client-to-server ordering on the LspService when there really shouldn't be one, leading to hacks such as MessageStream and wrapping each outgoing message in an Option to get the behavior we want.
  2. Client-to-server communication cannot be polled independently of server-to-client communication, since they are both being funneled through the LspService::call() method. This method must be called repeatedly in order to drive both independent streams of communication, which feels awkward to use and write tests for.

Proposal

For these reasons, I believe it would be better to adopt a bidirectional model structured like this:

impl tower::Service<MessageStream> for LspService {
    type Response = MessageStream;
    type Error = ExitedError;
    type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;

    fn call(&mut self, stream: MessageStream) -> Self::Future { ... }
}

With the above model, the call() method is only ever called once by the client, given a stream of incoming messages, producing a future which either resolves to Ok(MessageStream) if all is well, or an Err(ExitedError) if the language server has already shut down. Both message streams can then be polled independently from each other, in either direction, with no strict ordering. When the language server shuts down, the outgoing message stream will terminate and any subsequent attempt to call LspService::call() to produce a new one will immediately return Err(ExitedError).

User impact

This change would have a minor impact on users who rely on tower_lsp::Server to route communication over stdio. It will result in a slightly different initialization process:

Before

let (service, messages) = LspService::new(Backend::default());
Server::new(stdin, stdout)
    .interleave(messages)
    .serve(service)
    .await;

After

let service = LspService::new(Backend::default());
Server::new(stdin, stdout)
    .serve(service)
    .await;

This change would have a greater effect on users wrapping LspService in tower middleware in their own projects, since the service request and response types would have changed and would need to be handled differently.

Add lines about server capabilities on example

Hi. I tried to make a mock lsp server following the example in https://github.com/ebkalderon/tower-lsp/blob/4cf39815e7e9e82c96bb267144f17e7b25415db3/src/lib.rs , but its hover and completion didn't work at first.

I tried to update the capabilities in the initialize function, and it works.

    async fn initialize(&self, _: InitializeParams) -> Result<InitializeResult> {
        let mut res = InitializeResult::default();
        res.capabilities.hover_provider = Some(HoverProviderCapability::Simple(true));
        res.capabilities.completion_provider = Some(CompletionOptions::default());
        Ok(res)
    }
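For reference, the same capabilities can be declared with struct update syntax, which may read better in the README example (a sketch against the same lsp-types API):

    async fn initialize(&self, _: InitializeParams) -> Result<InitializeResult> {
        Ok(InitializeResult {
            capabilities: ServerCapabilities {
                hover_provider: Some(HoverProviderCapability::Simple(true)),
                completion_provider: Some(CompletionOptions::default()),
                ..ServerCapabilities::default()
            },
            ..InitializeResult::default()
        })
    }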

Seems like the doc should be updated? (Perhaps also the example on README.md?)

My environment is
macOS Catalina 10.15.7
neovim v0.6.1
lsp client is nvim-lspconfig

Automatic exit

I was a bit surprised to find a load of my tower-lsp language server processes still running long after VSCode had quit. I had assumed that VSCode's LanguageClient would automatically kill them, or maybe tower-lsp would handle it, but that does not seem to be the case.

Deno has a "solution" that basically polls whether the parent process still exists and calls exit() if not. Seems ugly but workable. But I wonder if tower-lsp should take care of that for me?
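If tower-lsp (or a server author) were to adopt that polling approach, a Unix-only sketch might look like the following. The client's PID arrives in InitializeParams::process_id; everything else here (the function, interval, and use of libc::kill) is an assumption for illustration:

use std::time::Duration;

// Exit once the parent editor process disappears.
async fn watch_parent(pid: i32) {
    loop {
        tokio::time::sleep(Duration::from_secs(30)).await;
        // kill(pid, 0) sends no signal; it only checks that `pid` still exists.
        if unsafe { libc::kill(pid, 0) } != 0 {
            std::process::exit(0);
        }
    }
}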

Reduce message reparsing with lspower-based codec

I've been reading through the LanguageServerCodec implementation merged in from the lspower fork in #309, and I feel somewhat skeptical of the claim in the README.md that this new codec is more efficient and reduces message reparsing compared to the nom implementation that existed before (source).

The previous implementation used a remaining_msg_bytes field to return Ok(None) immediately if the message payload wasn't entirely buffered into memory. By contrast, the lspower implementation attempts to blindly reparse the headers with httparse every time. I think it should be pretty easy to add a remaining_msg_bytes field to the new codec so that it truly runs as fast as it claims, though.

Also, I've identified another optimization that would also boost efficiency beyond both the original and lspower codec implementations: it would be nice if we could eagerly advance the src buffer past the HTTP headers once we're done parsing them, so that we don't need to re-run httparse each time the codec is still waiting for the payload to come through.

CC @silvanshade

Permit certain server-to-client notifications when uninitialized

Currently, all server-to-client notifications and requests in Printer are disallowed until the initialize request is responded to. However, according to the specification, the server is permitted to send the following messages to the client before responding to the initialize request:

  • window/showMessage
  • window/showMessageRequest
  • window/logMessage
  • telemetry/event

As such, the following API changes should be made:

  1. LanguageServer::initialize() should be given access to the Printer.
  2. Printer should permit sending the messages listed above without checking if the server is initialized.

Consider merging with lspower and maintaining a single code base

Hi @ebkalderon, I wanted to start a discussion about merging some of the changes I made when lspower was forked and then deprecating that project so we can maintain a single code base. (I think you pinged me about this but I don't see where now so I'll just create this new issue).

I think the only significant changes since you've been updating tower-lsp again are:

  1. The use of async-codec-lite in order to facilitate wasm targets re: #187
  2. The use of httparse and twoway instead of nom. This is more efficient and reduces the number of dependencies.

A lot of the rest of the changes are either bug fixes or updates you've already made or stylistic changes which aren't important.

I could work on PRs for the above two, deprecate lspower, and then help to maintain tower-lsp going forward if that is something that interests you.

Serve on TCP over stdin

Is it possible to listen on TCP instead of stdin? I believe this would make development a bit easier to follow.

Since Server takes an AsyncRead + Unpin I thought it would be simple, but it only answers the first message and then the client hangs. I am still noobish with async Rust, sorry.

Minimal example:

#[tokio::main]
async fn main() -> Result<()> {
    env_logger::init();

    let mut listener = tokio::net::TcpListener::bind("127.0.0.1:9001").await?;
    let (stream, _) = listener.accept().await?;

    let stdout = tokio::io::stdout();
    info!("Starting generic LSP Server..");

    let (service, messages) = LspService::new(Backend::default());

    Server::new(stream, stdout)
        .interleave(messages)
        .serve(service)
        .await;

    Ok(())
}
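One thing worth checking in the snippet above: responses are written to stdout rather than back over the TCP connection, so a TCP client would never see them. Splitting the socket (a sketch) keeps both directions on the stream:

let (read, write) = tokio::io::split(stream);

Server::new(read, write)
    .interleave(messages)
    .serve(service)
    .await;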

Client request routing is broken

It seems that the server-to-client request routing introduced in PR #134 is broken. If you connect a text editor to the server.rs example and attempt to execute the dummy.do_something command, the server will attempt to send a workspace/applyEdit request back to the client, await the response from the client to be received, and then proceed to return its own response back to the client.

However, this does not work. Instead, the server blocks indefinitely, awaiting the future for the workspace/executeCommand handler to resolve. This can never happen until the workspace/applyEdit response has been routed back from the client, but because the server is blocking on the workspace/executeCommand future, there is nothing driving the incoming message stream forward. Therefore, the client's response is never received, and the entire server deadlocks.

The culprit appears to be this particular .await in stdio.rs, which prevents the next message from being processed before the previous one completes. This issue could be fixed by removing the .await, sending the future itself over the channel instead, and having the printer task use StreamExt::buffer_unordered to drive multiple futures concurrently on a single thread, thereby allowing more incoming messages to be processed while one may be blocking on a response from the client.

Consider choosing a new name

Most Tower-based libraries in recent times, if not all, have dropped the tower-* naming convention of older libraries in favor of catchier and more unique names. For example, see tower-web and tower-grpc (old) vs. warp and tonic (new).

Should we consider a similar rename as well? If so, what should it be? Or would getting downstream projects to migrate be too painful? Why or why not?

SyntaxError: Invalid or unexpected token

Trying the example in the readme and https://github.com/ebkalderon/tower-lsp/blob/master/examples/stdio.rs I get the following in VSCode with what I think is a minimal extension:

/home/tec/Desktop/TEC/Projects/available/test-lsp/target/debug/lspserver:1
�ELF���
^

SyntaxError: Invalid or unexpected token
    at Module._compile (internal/modules/cjs/loader.js:942:18)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1051:10)
    at Module.load (internal/modules/cjs/loader.js:862:32)
    at Module._load (internal/modules/cjs/loader.js:774:14)
    at Function.Module._load (electron/js2c/asar.js:769:28)
    at Function.Module.runMain (internal/modules/cjs/loader.js:1103:10)
    at internal/main/run_main_module.js:17:11
[Error - 1:22:17] Connection to server got closed. Server will not be restarted.

This is an excerpt from my Cargo.toml:

[dependencies]
env_logger = "0.8.2"
serde_json = "1.0.60"
tokio = { version = "0.2", features = ["full"] }
tower-lsp = "^0.13"

[[bin]]
name = "lspserver"
path = "src/server/main.rs"

Can initialize be async as well?

Hello there!

I am currently trying to implement a language server using this crate and ran into a problem:

The InitializeParams holds the root_uri of the current project (testing with VSCode). I need to somehow store this value to index all the relevant files in that folder, but since the initialize method is not async, I cannot acquire a lock on a tokio Mutex to store that value somewhere. I have been fiddling around with std::sync::Mutex but ran into the blocker that I will later not be able to send the value I stored behind it into a Future, as std::sync::Mutex does not implement the Send trait.

Maybe I am completely on the wrong path ... maybe you have an idea how to work around that issue? I assume initialize not being async has a reason ... if not ... maybe it could be changed to be async.
I am rather new to Rust, so I might be missing something obvious, too.

Convert to std::future and async/await

With async/await stabilizing in Rust 1.39.0, this library should be ported to use the new syntax and switch to std::future::Future. This should involve upgrading tower-service to version 0.3.0-alpha.2 and switching away from jsonrpc-core to the brand-new jsonrpsee crate.

Unfortunately, jsonrpsee does not provide a stdio transport at the time of writing. Since this is a hard requirement for virtually all LSP servers, we will have to wait until support is implemented.

Avoid calling tokio::spawn internally

To help decouple tower-lsp more thoroughly from tokio, we should avoid calling tokio::spawn() internally. Instead, we should look into using executor-agnostic primitives such as join!() and possibly StreamExt::for_each_concurrent() to achieve similar behavior.

Client::configuration hangs

The following code works in 0.13.3 but hangs in and after 7af7102:

let config = self.client.configuration(vec![
    ConfigurationItem {
        scope_uri: None,
        section: Some("rant".to_string()),
    }
]).await;

In the trace I can see the configuration data come back, but the future never resolves.

(Since it's getting close to a year since any changes have been made to this repo, I wonder if maybe someone like @ralpha would be interested in an alt fork to collect updates and fixes in the meantime, e.g. #264.)

Switch from jsonrpc-core to jsonrpsee

As previously described in #58, the jsonrpc-core crate is currently in maintenance mode and the jsonrpsee crate is set to replace it in the future.

The jsonrpc-core crate has several design issues that make it less than ideal for our purposes, namely the dependency on old futures 0.1 and the lack of a bidirectional stdio transport, which is blocking #13 (see paritytech/jsonrpc#487 for details). jsonrpsee seems to have a much nicer API and is committed toward alleviating the existing crate's current structural issues.

As of pull request #101, we now support std::future and async/await through judicious use of .compat(), pervasive future boxing, and a light dash of Arcs in a few places to keep jsonrpc-core happy. However, it would be nice to eliminate those hacks and work with an async/await-native library such as jsonrpsee.

Unfortunately, jsonrpsee does not offer a stdio transport at all, much less one with bidirectional support, so we will have to wait until one is implemented upstream. Alternatively, if another JSON-RPC crate with equal or better ergonomics arises, we may switch tower-lsp to that instead.

Provide a way to access Client outside the callbacks

Currently tower-lsp works great for servers that use a task-based model. But consider the case where
you have a file system watcher running as a task that is independent from the language server task.

let server = initialize_server(...);
let watcher = initialize_watcher(...);
futures::join!(server, watcher);

When the watcher sees a change, it reparses a file, does some analysis, and sends the diagnostics to the client. The problem is: how can it access the Client that is normally provided by tower-lsp in the callbacks?

One way could be to set up the language server with a oneshot channel with the other process waiting on the other side and send the Client from the initialized method, but this seems a bit hacky.

What would be a much more elegant solution would be returning a clone of the client when the service is created, like this:

let stdin = tokio::io::stdin();
let stdout = tokio::io::stdout();

let (client, service, messages) = LspService::new(backend);
  // ^^^^^^ access to client
Server::new(stdin, stdout)
   .interleave(messages)
   .serve(service)
   .await;

Of course, with the caveat that the client will panic if it's used before the server is initialized. But this can be solved by providing a method on the client that can be used to check if the server is initialized before sending a message.

As with other issues, this will be affected by #102 and related rewrite issues (#177, #58), so maybe it's worth waiting with this, as it's not a major inconvenience.
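For what it's worth, with the LspService::new signature shown in the README example above, the Client handed to the closure can already be cloned out before the backend is built (a sketch; Client is assumed to be a cheap, clonable handle):

let mut watcher_client = None;
let (service, socket) = LspService::new(|client| {
    watcher_client = Some(client.clone());
    Backend { client }
});
// Hand `watcher_client` to the file watcher task, then serve as usual:
Server::new(stdin, stdout, socket).serve(service).await;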

Add Support for Proposed Features

Hello, it would be great if we could have support for proposed features via a Cargo feature flag. This is currently what gluon-lang/lsp-types is doing to allow for using the latest features in the LSP specification version 3.17 (e.g. inlay hints). They simply gate all proposed features behind a proposed Cargo feature, with no stability guarantees on breaking changes under this flag.

How do you register file watchers?

The documentation says to use client.register_capability() but it isn't really clear how. There's also a DidChangeWatchedFilesRegistrationOptions struct in there but it isn't referenced from anywhere else. An example would be great!
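Until the docs gain one, here is a sketch of wiring those pieces together (assuming an lsp-types version where FileSystemWatcher::glob_pattern is a plain String; the function and registration id are invented for illustration):

use tower_lsp::lsp_types::{
    DidChangeWatchedFilesRegistrationOptions, FileSystemWatcher, Registration,
};
use tower_lsp::Client;

async fn register_watchers(client: &Client) -> tower_lsp::jsonrpc::Result<()> {
    let options = DidChangeWatchedFilesRegistrationOptions {
        watchers: vec![FileSystemWatcher {
            glob_pattern: "**/*.rs".to_string(),
            kind: None, // None = watch for create, change, and delete
        }],
    };

    client
        .register_capability(vec![Registration {
            id: "file-watcher".to_string(),
            method: "workspace/didChangeWatchedFiles".to_string(),
            register_options: Some(serde_json::to_value(options).unwrap()),
        }])
        .await
}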

Implement support for $/cancelRequest

Currently, LspService provides no mechanism for performing request cancellation via the implementation-specific $/cancelRequest notification (specification). This is not a required method for language servers to implement, but it would be nice to support in cases where the open workspace or programming language being supported is very complex.

One approach would be to add a pending_requests hashmap to LspService, using a similar approach to Client to implement tracking of pending requests. We could either use a oneshot channel or futures::future::abortable() to short-circuit a given LSP request if a $/cancelRequest notification is received first.
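Sketched with futures::future::abortable() (the surrounding names such as run_handler and pending are hypothetical):

use futures::future::{abortable, AbortHandle};
use std::collections::HashMap;

// One entry per in-flight request:
let mut pending: HashMap<jsonrpc::Id, AbortHandle> = HashMap::new();

let (task, handle) = abortable(run_handler(request));
pending.insert(id.clone(), handle);
// On a matching $/cancelRequest notification:
if let Some(handle) = pending.remove(&id) {
    handle.abort(); // the Abortable task resolves to Err(Aborted)
}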

This feature is fairly low priority, especially when compared to #10, but it would be nice to have.

Consider switching from log facade to tracing

It might be nice to switch to the tracing crate due to its excellent async integration and the ability to trace the progress of individual futures. However, we should probably listen to user feedback to determine whether it provides enough value over log to warrant switching, since this has the potential to be a disruptive change. Anyone hold an opinion on this?

Add support for client requests

Currently, the Printer and associated MessageStream only support sending notifications from the server to the client.

This is due to both limitations in jsonrpc-core and design difficulties in making the Server and LspService bidirectional. Implementing this in a satisfactory way would allow the following server-to-client requests to be implemented:

Thankfully, these aren't quite as critical as the rest of the requests in the LSP specification, but it would still be nice to support them eventually for the sake of completeness.

How to proactively send commands

Hi, I'm a bit of a Tokio newbie (I know nothing about it whatsoever), and I was wondering how to send custom requests to the client proactively, in other words not in response to a request from the client. In my case I want to send an ExecuteCommand when some file changes.

I'm not really sure where to start though - can you give me any hints? I've added a method to my Backend like this:

  pub async fn start_python_debugging(&self, port: u16) -> Result<<ExecuteCommand as Request>::Result> {
      // https://code.visualstudio.com/api/references/vscode-api#debug
      self.client.send_custom_request::<ExecuteCommand>(ExecuteCommandParams {
          command: "vscode.debug.startDebugging".to_string(),
          arguments: todo!(),
          work_done_progress_params: WorkDoneProgressParams { work_done_token: None },
      }).await
  }

And I'm going to use the Notify crate which has an async example, but how do I connect them?

Compare tokio::sync and futures::channel

As raised in #134 (comment), there exist two competing async oneshot and MPSC channel implementations: tokio::sync and futures::channel. They provide nearly identical public APIs but their implementations differ quite a bit. It is currently unknown which one is the most efficient, and perhaps it might be useful to benchmark the two.

At the time of writing, tower-lsp uses the futures channel implementation internally. We could switch to the tokio implementation if it provides better performance, but note that it would also introduce even greater coupling to the tokio executor over other available options.

How to create a pure client?

tl;dr I'm looking for something like the createConnection() function in vscode-languageclient:

https://github.com/microsoft/vscode-languageserver-node/blob/901fd8829ee0af8f457bc2a2b1ab5910abd669f6/client/src/common/client.ts#L169

I feel like a lot of the pieces are here to make a "pure LSP client," but I am unclear as to whether it's possible today. By a "pure client," I mean to assume you have an existing LSP server that can be treated as a black box. The caller is responsible for producing a bidirectional channel (often this is achieved by launching the LSP server and connecting its stdin/stdout), which should be used to instantiate some sort of Client object. The vscode-languageclient npm module is a good example of a proper API+implementation for this:

https://github.com/microsoft/vscode-languageserver-node/blob/master/client/src/common/client.ts

As it stands, tower_lsp::Client::new is private and the only way to create one seems to be via LspService::new(), which requires returning an implementation of tower_lsp::LanguageServer. Assuming someone else has implemented the LSP server, why should I be required to implement LanguageServer in order to connect to it?

Improve Client ergonomics

Most callbacks provide a client parameter, but some don't (e.g. goto_definition). Is there a reason for that? It makes using the client.log_message() method awkward.

Add an example for handling state

I think an example with a recommended method to handle state would be very useful (since it's what a real use case would probably look like).

I think simply printing the open files in the workspace every time a file is opened (did_open) or closed (did_close), and counting the number of calls to did_change, would be enough to get started.

The main reason for this is that only &self is available in the methods, so something straight forward like this

struct SourceFile {
     // ...
}

struct Backend {
     files: Vec<SourceFile>
}

impl LanguageServer for Backend {
    // ...
    fn did_open(&self, printer: &Printer, params: DidOpenTextDocumentParams) {
        let file = SourceFile::new(...);
        self.files.push(file);
    }
    // ...
}

won't compile because Vec::push() requires a mutable reference to self.
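The usual workaround (a sketch, not from the repo) is interior mutability, so handlers can mutate shared state through &self:

use tokio::sync::RwLock;

struct SourceFile; // stand-in for the real type

struct Backend {
    files: RwLock<Vec<SourceFile>>,
}

impl Backend {
    // Callable from any handler that only has &self, e.g. did_open:
    async fn add_file(&self, file: SourceFile) {
        self.files.write().await.push(file);
    }
}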

It would also be nice to recommend something that can make use of the fact that the server has async support.

Regression in parsing small chunked messages in codec

As of merging #316, the new LanguageServerCodec implementation is no longer able to parse small chunked messages properly. The following minimal reproducible example succeeds with 0.15.1, but not in master currently:

#[test]
fn decodes_small_chunks() {
    let decoded = r#"{"jsonrpc":"2.0","method":"exit"}"#;
    let content_type = "application/vscode-jsonrpc; charset=utf-8";
    let encoded = encode_message(Some(content_type), decoded);

    let mut codec = LanguageServerCodec::default();
    let mut buffer = BytesMut::from(encoded.as_str());

    let rest = buffer.split_off(40);
    let message = codec.decode(&mut buffer).unwrap();
    assert_eq!(message, None);
    buffer.unsplit(rest);

    let rest = buffer.split_off(80);
    let message = codec.decode(&mut buffer).unwrap();
    assert_eq!(message, None);
    buffer.unsplit(rest);

    let rest = buffer.split_off(96);
    let message = codec.decode(&mut buffer).unwrap();
    assert_eq!(message, None);
    buffer.unsplit(rest);

    let decoded: Value = serde_json::from_str(decoded).unwrap();
    let message = codec.decode(&mut buffer).unwrap();
    assert_eq!(message, Some(decoded));
}
$ cargo test decodes_small_chunks
   Compiling tower-lsp v0.15.1 (/home/ekalderon/Documents/tower-lsp)
    Finished test [unoptimized + debuginfo] target(s) in 18.98s
     Running unittests (target/debug/deps/tower_lsp-db32f97915d141a8)

running 1 test
test codec::tests::decodes_small_chunks ... FAILED

failures:

---- codec::tests::decodes_small_chunks stdout ----
thread 'codec::tests::decodes_small_chunks' panicked at 'range end index 33 out of range for slice of length 1', src/codec.rs:233:28
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    codec::tests::decodes_small_chunks

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 25 filtered out; finished in 0.00s

error: test failed, to rerun pass '--lib'

The issue appears to be caused by src/codec.rs#L233, where previously the length was checked before slicing but is no longer (source).
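Restoring the guard would presumably look something like this (field and variable names are assumed, since the exact surrounding code isn't quoted here):

// Inside Decoder::decode, before slicing the payload out of `src`:
if src.len() < content_len {
    return Ok(None); // not enough bytes buffered yet; don't index out of range
}
let body = &src[..content_len]; // now guaranteed in bounds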
