distant's People

Contributors

chipsenkbeil, westernwontons

distant's Issues

Support transfer command

A friendly wrapper around reading and writing a file, where we specify a remote and a local path just like with scp.

distant transfer path/to/local example.com:path/to/remote
distant transfer example.com:path/to/remote path/to/local

Support disconnected ttl

When listening, it may be a good idea to support automated shutdown if no client has successfully connected for N seconds. The idea here is to not leave the process around if we have lost the auth key. This would work especially well with #9 and #10, where we have isolated keys; once those are gone, there should be no way to access the server.
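
A minimal sketch of such a shutdown task, assuming tokio and a shared timestamp updated by the accept loop (the names here are hypothetical, not actual distant code):

use std::sync::{
    atomic::{AtomicU64, Ordering},
    Arc,
};
use std::time::{Duration, SystemTime, UNIX_EPOCH};

fn now_secs() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs()
}

/// Exits if no client has successfully connected within the last `ttl`;
/// the accept loop would call `last_conn.store(now_secs(), ...)` on each
/// successful, authenticated connection
async fn disconnected_ttl(last_conn: Arc<AtomicU64>, ttl: Duration) {
    loop {
        tokio::time::sleep(Duration::from_secs(1)).await;
        let idle = now_secs().saturating_sub(last_conn.load(Ordering::SeqCst));
        if idle >= ttl.as_secs() {
            // Auth key is presumed lost; don't leave the process around
            std::process::exit(0);
        }
    }
}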

DistantCodec may be unnecessary

I wrote this near the beginning of this project and assumed that it was for handling incomplete messages. Thinking it through, the codec may not make sense for that: getting an incomplete request or response and waiting on more data, when more than one connection could send other data or requests, just sets up for failure. I also don't think that's actually how the networking works.

Instead, the codec would be used to

  1. Make sure that our request or response that is serialized (and optionally encrypted) gets broken up into frames of a size that fits our transport. From there, we would include in our codec's encoding an id, index, and count to keep track of incoming data that is part of a larger request/response. In the decoding, any completed message would then be returned. The question is: is this necessary? (A rough framing sketch follows this list.)
  2. We could move the authentication and encryption into the codec itself as an Option<Arc<SecretKey>>, since each individual collection of bytes would be encrypted and signed (or decrypted and verified) if we're breaking it up into pieces.
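
As a rough illustration of option 1, the frame header and chunking could look like the following sketch (illustrative only, not the actual codec):

/// Header prepended to each frame of a larger serialized message
struct FrameHeader {
    id: u64,    // which request/response this frame belongs to
    index: u32, // position of this frame within the message
    count: u32, // total number of frames in the message
}

/// Splits a serialized (and optionally encrypted) message into frames whose
/// payloads are no larger than `max_frame_size` bytes
fn into_frames(id: u64, payload: &[u8], max_frame_size: usize) -> Vec<(FrameHeader, Vec<u8>)> {
    let chunks: Vec<&[u8]> = payload.chunks(max_frame_size).collect();
    let count = chunks.len() as u32;
    chunks
        .into_iter()
        .enumerate()
        .map(|(index, chunk)| {
            (
                FrameHeader { id, index: index as u32, count },
                chunk.to_vec(),
            )
        })
        .collect()
}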

Refactor client & server logic to standalone modules

Currently, the majority of the logic for the server and client is bolted to the subcommands. We can see this causing code smells when sharing the interactive loop between the launch and action subcommands.

This also makes it more difficult to test certain pieces in an integration fashion (not e2e). It's also difficult to keep track of the different tasks that are spawned and to manage them.

These should be refactored to be standalone structs that capture the full suite of tasks, support shutdown, etc. This is also important prior to the first stable release.

Support defaulting launch & proc-run to use $SHELL with option for --no-shell otherwise

At least for launch (proc-run may not need it), being able to take advantage of the PATH assigned to a user would be handy.

Testing with ssh <host> echo '$SHELL' resulted in the right shell being printed, so that environment variable is available. It looks like the majority of shells support <shell> -l for login and <shell> -c <command> to run a command as non-login. It doesn't look like you can run a login shell and execute a command, but I don't think a login shell is necessary to get the PATH configured for the specific user, as a non-login shell will still source files like ~/.bashrc and others.

Anyway, the idea here is to default to attempting to run distant over ssh using $SHELL -c 'distant ...' to have the appropriate environment variables established. If --no-shell is provided, then we wouldn't wrap the invocation in a $SHELL call.

Doing this for proc-run may not make sense as any environment given to the distant binary is passed on to its children; so, just providing launch with $SHELL should pass on the results to the children as well.
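
A sketch of how the launched command might be assembled (hypothetical function name; real shell-escaping of the command would be needed):

/// Builds the command string to run over ssh, wrapping it in $SHELL
/// unless --no-shell was provided
fn build_remote_command(distant_cmd: &str, no_shell: bool) -> String {
    if no_shell {
        distant_cmd.to_string()
    } else {
        // $SHELL is expanded on the remote side; -c runs a non-login shell
        // that still sources files like ~/.bashrc, picking up the user's PATH
        format!("$SHELL -c '{}'", distant_cmd)
    }
}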

Full shell support

distant shell would be the command to launch a connection to the remote machine.

In terms of manipulating the local tty, we could use crossterm or termion, maybe? Not sure how to simulate the buffer on the other side where the programs are being run (like vim). Would need to look at mosh as an example.

Child procs do not exit when a client connected to a unix socket terminates

When using launch with the unix socket as the communication medium with distant action calls, there is an indirect connection that goes:

action -> forked launch -> listen

Because launch maintains a single client connection to distant listen, there is no TCP stream disconnection on the server, which is normally how it knows to kill any lingering processes associated with a client.

To fix this, the simplest solution I can think of is to keep track of proc ids in the launch process and associate them with a client connected via a UnixStream. Once that stream ends, the launch process will send kill requests to the server for all of the associated processes.
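
A sketch of that bookkeeping (the types here are hypothetical, not actual distant code):

use std::collections::HashMap;

/// Tracks which proc ids belong to which UnixStream-connected client
struct ProcTracker {
    procs_by_conn: HashMap<usize, Vec<usize>>,
}

impl ProcTracker {
    /// Called whenever a client spawns a process through the proxy
    fn register(&mut self, conn_id: usize, proc_id: usize) {
        self.procs_by_conn.entry(conn_id).or_default().push(proc_id);
    }

    /// Called when the UnixStream for `conn_id` ends; returns the proc ids
    /// for which the launch process should send kill requests to the server
    fn on_disconnect(&mut self, conn_id: usize) -> Vec<usize> {
        self.procs_by_conn.remove(&conn_id).unwrap_or_default()
    }
}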

Support fork-like experience for Windows

When launching our server, we run distant listen --daemon ... to have the process detach from SSH and continue running. With Windows, there is no true fork that is reliable and fast. Instead, we will need to spawn a child process and pass all arguments (other than daemon). The child process will need to pipe back over stdout the credentials for the primary process to print out.

See https://stackoverflow.com/questions/52580384/fully-detach-childprocess-in-rust-on-windows for discussion. Seems like we need to run the process directly instead of via powershell or something else to ensure that it detaches and is not killed when the parent exits.
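
A sketch of spawning a detached child using Win32 creation flags (the flag values come from the Win32 API; the argument handling is illustrative):

use std::io;
use std::process::{Child, Command, Stdio};

#[cfg(windows)]
const DETACHED_PROCESS: u32 = 0x0000_0008;
#[cfg(windows)]
const CREATE_NEW_PROCESS_GROUP: u32 = 0x0000_0200;

fn spawn_detached(args: &[String]) -> io::Result<Child> {
    let mut cmd = Command::new(std::env::current_exe()?);
    cmd.args(args) // all arguments other than --daemon
        .stdout(Stdio::piped()); // child pipes credentials back over stdout
    #[cfg(windows)]
    {
        use std::os::windows::process::CommandExt;
        cmd.creation_flags(DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP);
    }
    cmd.spawn()
}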

The other challenge is figuring out how to test this. We might need another CI job just for Windows without WSL.

Blocks #51.

Make cli binary optional feature

In prep for the 1.0 milestone, we need to support separating the cli from the library. This first depends on #24, but also needs to be able to exclude dependencies like StructOpt that are only useful for the binary.

This means that we would need a feature that indicates that we are building the binary so we can choose to include the StructOpt macro on our data.

Two possible choices (a sketch of the feature-gated derive approach follows the list):

  1. Keep everything in a single source, have a feature for cli, and go through marking all uses of those libraries as optional
  2. Move core to distant-core crate, but still have feature that supports including structopt for the data. The binary in turn would import the core library with the structopt feature enabled such that the data supports it.
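
In either case, the derive would be gated behind a feature via cfg_attr; a minimal sketch (hypothetical struct, assuming a Cargo feature named structopt that enables the optional dependency):

#[cfg_attr(feature = "structopt", derive(structopt::StructOpt))]
#[derive(Debug)]
pub struct LaunchOpts {
    #[cfg_attr(feature = "structopt", structopt(long))]
    pub port: Option<u16>,
}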

Implement unit, integration, e2e, and stress tests

Compared to previous projects where I spent weeks to months writing tests, I wanted to quickly push this out and managed to get the minimum functionality + proper encryption & authentication in under a week.

For stability, it would be nice to at least support some integration tests to make sure the CLI doesn't break unexpectedly.

Core

  • Lsp Request Parser & Session Creator
  • Transport
    • Abstraction (send/receive)
    • Handshake
    • Distant Codec
  • Client
    • send
    • send_timeout
    • fire
    • broadcast receiver
    • wait
    • abort
  • Session
    • From environment
    • From stdin
    • Resolving host
    • Parsing DISTANT DATA <HOST> <PORT> <KEY>
    • Saving to disk
    • Loading from disk
  • State
    • Client cleanup
    • Process pushing
  • Utils
    • ConnTracker (exceeded timeout logic validation)
    • Shutdown task (resetting itself, not shutting down when conn exists, sending shutdown signal when no conn exists)
    • StringBuf (consuming full lines, consuming nothing when no full lines available)

Cli

  • Launch
    • Save session to file
    • Printing session to stdout
    • Running a unix socket daemon
    • Running interactively (like interactive action)
    • Verifying server process spawned on remote machine
  • Action
    • Using environment session
    • Using file session
    • Using pipe session
    • Using lsp session
    • Using unix socket daemon
    • Sending kill signals for clients that disconnect when in unix socket mode
    • Acting as a proxy process
    • Supporting multiple tenant routing when in unix socket mode
    • Sending single command (shell mode)
    • Sending commands interactively (shell mode)
    • Sending single command (json mode)
    • Sending commands interactively (json mode)
    • Sending batch of commands (json mode)
    • Exit status reflection
    • Shutdown capability
    • Output matches expected for each response type (shell mode)
    • Json matches format for a response & batch of responses (json mode)
  • Listen
    • Supports single connection
    • Supports multi-connection communicating at same time
    • Able to shut down when get timed notification
    • Properly kills procs tied to disconnected clients
    • Handle success & failure cases of FileRead
    • Handle success & failure cases of FileReadText
    • Handle success & failure cases of FileWrite
    • Handle success & failure cases of FileWriteText
    • Handle success & failure cases of FileAppend
    • Handle success & failure cases of FileAppendText
    • Handle success & failure cases of DirRead
    • Handle success & failure cases of DirCreate
    • Handle success & failure cases of Remove
    • Handle success & failure cases of Copy
    • Handle success & failure cases of Rename
    • Handle success & failure cases of Exists
    • Handle success & failure cases of Metadata
    • Handle success & failure cases of ProcRun
    • Handle success & failure cases of ProcKill
    • Handle success & failure cases of ProcStdin
    • Handle success & failure cases of ProcList
    • Handle success & failure cases of SystemInfo

Stress Tests

  • Figure out how server responds when getting hit with large number of connections
  • Figure out how client & server respond to large messages (big file read/write)
  • Figure out why LSP is dying w/ exit code 74 when being proxied (seems to happen after a little time for neovim)
    • Doesn't appear to be an issue anymore post refactoring and using the LSP command
  • Facilitate network outages and determine expected response
  • Kill proc managed by server and ensure that it communicates death & cleans up process
  • Kill server and see how client responds in different modes w/ and w/o timeout configured

cargo build issue on Apple Silicon M1 Mac

>> cargo build --release

...
   Compiling futures v0.3.17
   Compiling tokio-util v0.6.8
   Compiling serde_cbor v0.11.2
   Compiling distant-core v0.14.2 (/Users/reportaman/Downloads/distant/core)
error[E0658]: use of unstable library feature 'str_split_once': newly added
   --> core/src/client/lsp/data.rs:280:47
    |
280 |             if let Some((name, value)) = line.split_once(':') {
    |                                               ^^^^^^^^^^
    |
    = note: see issue #74773 <https://github.com/rust-lang/rust/issues/74773> for more information

error[E0599]: no variant or associated item named `Unsupported` found for enum `std::io::ErrorKind` in the current scope
   --> core/src/data.rs:632:28
    |
632 |             io::ErrorKind::Unsupported => Self::Unsupported,
    |                            ^^^^^^^^^^^ variant or associated item not found in `std::io::ErrorKind`

error[E0599]: no variant or associated item named `OutOfMemory` found for enum `std::io::ErrorKind` in the current scope
   --> core/src/data.rs:633:28
    |
633 |             io::ErrorKind::OutOfMemory => Self::OutOfMemory,
    |                            ^^^^^^^^^^^ variant or associated item not found in `std::io::ErrorKind`

error[E0658]: use of unstable library feature 'str_split_once': newly added
  --> core/src/server/port.rs:67:17
   |
67 |         match s.split_once(':') {
   |                 ^^^^^^^^^^
   |
   = note: see issue #74773 <https://github.com/rust-lang/rust/issues/74773> for more information

error: aborting due to 4 previous errors

Some errors have detailed explanations: E0599, E0658.
For more information about an error, try `rustc --explain E0599`.
error: could not compile `distant-core`

To learn more, run the command again with --verbose.

~/Downloads/distant master 19s
โฏ neofetch
                    'c.          [email protected]
                 ,xNMM.          ------------------------------
               .OMMMMo           OS: macOS 11.6 20G165 arm64
               OMMM0,            Host: Macmini9,1
     .;loddo:' loolloddol;.      Kernel: 20.6.0
   cKMMMMMMMMMMNWMMMMMMMMMM0:    Uptime: 4 hours, 1 min
 .KMMMMMMMMMMMMMMMMMMMMMMMWd.    Packages: 1 (brew)
 XMMMMMMMMMMMMMMMMMMMMMMMX.      Shell: zsh 5.8
;MMMMMMMMMMMMMMMMMMMMMMMM:       Resolution: 3840x2160
:MMMMMMMMMMMMMMMMMMMMMMMM:       DE: Aqua
.MMMMMMMMMMMMMMMMMMMMMMMMX.      WM: Quartz Compositor
 kMMMMMMMMMMMMMMMMMMMMMMMMWd.    WM Theme: Blue (Dark)
 .XMMMMMMMMMMMMMMMMMMMMMMMMMMk   Terminal: tmux
  .XMMMMMMMMMMMMMMMMMMMMMMMMK.   CPU: Apple M1
    kMMMMMMMMMMMMMMMMMMMMMMd     GPU: Apple M1
     ;KMMMMMMMWXXWMMMMMMMk.      Memory: 2003MiB / 16384MiB
       .cooc,.    .,coo:.

Support ssh client tunneling natively

Via something like channel_direct_tcpip, we could avoid the need to allocate additional ports by launching a program directly via the ssh library and then tunneling to it through SSH.

In terms of async support, it seems that the existing libraries have been removed as recommendations as described here: alexcrichton/ssh2-rs#224

We'd need to build something simple like threaded message passing to send/receive data. From reading the documentation, we can set the session to be nonblocking. Sessions need to avoid concurrent access, so we'd want both reads and writes to live on a single thread that uses message passing to move data around.
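
A sketch of that single-threaded I/O loop, written generically over any Read + Write channel such as the one returned by channel_direct_tcpip (illustrative; error handling and buffer sizing would need real thought):

use std::io::{ErrorKind, Read, Write};
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Owns a nonblocking channel on a single thread, exchanging data with the
/// rest of the program via message passing
fn spawn_io_thread<C: Read + Write + Send + 'static>(
    mut channel: C,
) -> (mpsc::Sender<Vec<u8>>, mpsc::Receiver<Vec<u8>>) {
    let (write_tx, write_rx) = mpsc::channel::<Vec<u8>>();
    let (read_tx, read_rx) = mpsc::channel::<Vec<u8>>();
    thread::spawn(move || {
        let mut buf = [0u8; 1024];
        loop {
            // Drain any queued outgoing data
            while let Ok(data) = write_rx.try_recv() {
                if channel.write_all(&data).is_err() {
                    return;
                }
            }
            // Read whatever the remote side has available
            match channel.read(&mut buf) {
                Ok(0) => return, // channel closed
                Ok(n) => {
                    let _ = read_tx.send(buf[..n].to_vec());
                }
                Err(e) if e.kind() == ErrorKind::WouldBlock => {
                    thread::sleep(Duration::from_millis(10));
                }
                Err(_) => return,
            }
        }
    });
    (write_tx, read_rx)
}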

Add support for server lifetime

Terminates client/server after some period of time, regardless of active connections. Designed for security at work. Default should be 8 hours.

Provide some saner defaults

  • Launch, if using a socket by default, should also run in the background by default. As the --daemon flag only applies to socket and socket is only available on UNIX-based systems, this shouldn't conflict with forking that is also only available on UNIX systems
  • Timeout should have something like 3s by default with the option to disable timeout, rather than having an unlimited timeout. It's rare that someone using the CLI would want to have the process hang forever

Support batch commands

It can get expensive to do individual network requests. Support for batching multiple requests (only via JSON) would be ideal.

Shaky tests for cli::action::proc_run

It seems like an extra message is being sent to the client after the process has completed. At least, that is my assumption, as otherwise the channel for the session shouldn't be closed. I ran this test on my local machine 500 times in rapid succession and could not reproduce the problem. For now, filing an issue to keep track of it, as it surfaces every so often on GitHub Actions.

for i in {1..500}; do cargo test proc_run::should_support_json_to_capture_and_print_stdout; done

proc_run::should_execute_program_and_return_exit_status

As seen in https://github.com/chipsenkbeil/distant/runs/3601723633:

---- cli::action::proc_run::should_execute_program_and_return_exit_status stdout ----
-------------- TEST START --------------
thread 'cli::action::proc_run::should_execute_program_and_return_exit_status' panicked at 'Unexpected stderr, failed diff var original
โ”œโ”€โ”€ original: 
โ”œโ”€โ”€ diff: 
--- value	expected
+++ value	actual
@@ -0,0 +1 @@
+ERROR [distant_core::client::session] Failed to trigger broadcast: channel closed

โ””โ”€โ”€ var as str: ERROR [distant_core::client::session] Failed to trigger broadcast: channel closed


command=`"/home/runner/work/distant/distant/target/debug/distant" "action" "--session" "environment" "proc-run" "--" "bash" "/tmp/.tmpneRvBj/exit_code.sh" "0"`
code=0
stdout=```""```
stderr=```"ERROR [distant_core::client::session] Failed to trigger broadcast: channel closed\n"```
', /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/assert_cmd-2.0.0/src/assert.rs:124:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

proc_run::should_support_json_to_capture_and_print_stdout

As seen in https://github.com/chipsenkbeil/distant/runs/3602125023:

-------------- TEST START --------------
thread 'cli::action::proc_run::should_support_json_to_capture_and_print_stdout' panicked at 'Unexpected response: ProcStdout { id: 14778104203985831116, data: "some output" }', tests/cli/action/proc_run.rs:215:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
error: test failed, to rerun pass '--test cli_tests'

Support unix domain sockets to keep auth secret in launch program

Rather than writing the auth secret to a file, which enables any program under the same owner to read the secret, we could offer on macOS, Linux, FreeBSD, and other Unix-based OSes the ability to have the client-side program that launches the server maintain the secret and act as a proxy between subsequent clients and the remote machine.

The clients would simply look for the unix socket first before seeking a session file. The launch command, if configured to use a unix socket (default for Mac/Linux/etc. and unavailable on Windows), would listen for connections from other clients and forward along the data. We'd not even need to understand the messages as they just get passed directly to the server after being encrypted and auth'd by the proxy.
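
A rough tokio sketch of the forwarding loop, leaving out the encryption/auth step the proxy would perform on the data before it reaches the server:

use tokio::io;
use tokio::net::{TcpStream, UnixListener};

async fn proxy(socket_path: &str, server_addr: String) -> io::Result<()> {
    let listener = UnixListener::bind(socket_path)?;
    loop {
        let (mut client, _) = listener.accept().await?;
        let addr = server_addr.clone();
        tokio::spawn(async move {
            if let Ok(mut server) = TcpStream::connect(addr).await {
                // Shuttle bytes in both directions until either side closes
                let _ = io::copy_bidirectional(&mut client, &mut server).await;
            }
        });
    }
}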

Consider lsp subcommand to properly handle communication

Could bring in https://docs.rs/lsp-types/0.89.2/lsp_types/index.html to make sure we parse correctly. At the simplest, we would be intercepting requests and responses, replacing any local URIs with distant:// when receiving responses and distant:// with file:// when sending requests.
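
At its simplest, the interception could be a string substitution on each message (a naive sketch; a real implementation would parse the JSON via lsp-types):

/// Server -> client: local file URIs become distant URIs
fn patch_response(msg: &str) -> String {
    msg.replace("file://", "distant://")
}

/// Client -> server: distant URIs become local file URIs
fn patch_request(msg: &str) -> String {
    msg.replace("distant://", "file://")
}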

Would make resolving chipsenkbeil/distant.nvim#23 on distant.nvim a lot easier versus needing to replace all handlers with editor.open calls. Or, that's the idea. Unclear to me which path would be a better fit.

Support stdin post-launch

Normally, distant launch exits after creating a session file or - in the future - remains running while listening to a unix socket (see #6).

An even more secure route is to have launch not exit but instead listen for data to send to the remote endpoint over stdin, feeding results back via stdout/stderr. This is just like distant action --interactive but will never expose any session data.

Support retry logic for clients

For interactive clients, it would be useful to support retry logic versus immediate death when the network connection with the server is severed.

With UDP there's no connection, so this isn't a concern there; with TCP, there is a connection that would need to be re-established.

Note that running processes that are disconnected would also be killed by the server as part of connection cleanup.

Support ssh2 transportation to not require distant server binary

Looking through ssh2-rs, we may be able to implement all of the current features of distant purely from a mixture of sftp and exec. There are a couple of gotchas that I think should be okay:

  1. We have to provide a mode like 644 for file operations (but capable of read/write/append for files over sftp channel)
  2. Reading a directory with a depth > 1 will require multiple calls (expensive)
  3. Running a proxy for a process is a bit different; I think I use exec and then can read stdout from the channel (see discussion), write stdin to the channel, and read stderr from it as well
  4. Still need to manage a proc list, but it would map to a series of channels, each dedicated to running a singular process
  5. System info would probably be unsupported as we're using info baked into our Rust binary instead of reading directly from the system

Authentication may be the hardest part and there's some good discussion here regarding KeyboardInteractivePrompt and userauth_password.

Support unix socket auth key

Right now, when we launch and produce a daemon that listens to a unix socket to forward requests, the daemon does not use an auth key at all to verify requests. This means that anyone on the machine with access to the socket can talk to it, which makes the socket less ideal for security reasons.

It would be good to have the option to provide a socket auth key in the same manner that a session auth key can be provided, at least in the form of the pipe option when creating the daemon.

The reason for this is that we could then have neovim use a single connection when leveraging a client and LSPs by having them all go through the unix socket instead of spawning direct clients.

LSP content not captured properly

The main goal of enabling distant action proc-run -- ... was to support passing stdin/stderr/stdout in a way that worked as a proxy for LSP clients.

The spec is https://microsoft.github.io/language-server-protocol/specifications/specification-current/

Turns out, it works to pass the header portion that includes newlines, but I believe that it fails to acquire and send the JSON content. My guess is that the content doesn't end with a newline to be sent. If this is the case, it means I need to revert the work that was done to send entire lines and instead see if I can still support sending content that does not have line endings.
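
Per the spec, the fix likely amounts to reading headers line by line and then reading exactly Content-Length bytes of content rather than waiting for a trailing newline; a minimal sketch:

use std::io::{self, BufRead, Read};

/// Reads one LSP message: newline-terminated headers, then exactly
/// Content-Length bytes of JSON content (which has no trailing newline)
fn read_lsp_message<R: BufRead>(reader: &mut R) -> io::Result<Vec<u8>> {
    let mut content_length = 0usize;
    loop {
        let mut line = String::new();
        reader.read_line(&mut line)?;
        let line = line.trim_end();
        if line.is_empty() {
            break; // blank line separates headers from content
        }
        if let Some(value) = line.strip_prefix("Content-Length:") {
            content_length = value.trim().parse().map_err(|_| {
                io::Error::new(io::ErrorKind::InvalidData, "bad Content-Length")
            })?;
        }
    }
    let mut content = vec![0u8; content_length];
    reader.read_exact(&mut content)?;
    Ok(content)
}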

Support session from lsp initialization params

To support the neovim plug-in, we need a way to get a session securely. For a normal client, the stdin option is most secure, but neovim's native lsp doesn't provide control to send a stdin line before initialization.

Instead, we should parse an initialization request first and extract out a session included in initialization params before forwarding it along.

The session should be included in the initializationOptions defined in the specification. Those options would be removed from what is forwarded, as if they were never there.

Simplify encryption and auth

Since we're using XChaCha20-Poly1305, which provides both encryption and authentication, we don't need blake256 for auth or the expensive key exchange process. This would also simplify our setup, as we'd no longer need a handshake to set up each connection.
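
A minimal sketch of the simplified setup, assuming the chacha20poly1305 crate (exact trait names vary by crate version):

use chacha20poly1305::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    XChaCha20Poly1305,
};

fn main() {
    let key = XChaCha20Poly1305::generate_key(&mut OsRng);
    let cipher = XChaCha20Poly1305::new(&key);
    // 24-byte nonce, large enough to generate randomly per message
    let nonce = XChaCha20Poly1305::generate_nonce(&mut OsRng);
    let ciphertext = cipher
        .encrypt(&nonce, b"request bytes".as_ref())
        .expect("encryption failed");
    // Decryption verifies the built-in auth tag, so no separate MAC is needed
    let plaintext = cipher
        .decrypt(&nonce, ciphertext.as_ref())
        .expect("decryption failed");
    assert_eq!(plaintext, b"request bytes");
}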

Support optional Lua feature for core & cli

If the feature is enabled, then we can support shipping Lua functions via string.dump from a client and loadstring to load the function on the server side. Enabling the feature would include RequestData::Lua { chunk: String } and ResponseData::Lua { result: String } where the chunk represents a function encoded using string.dump and is expected to be a function that returns a string.
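
A sketch of what those feature-gated variants might look like (hypothetical; the feature name and derives are assumptions, and in reality these would be variants added to the existing enums):

#[cfg(feature = "lua")]
#[derive(Debug)]
pub enum RequestData {
    /// Function bytecode produced client-side via Lua's string.dump;
    /// calling the loaded function is expected to return a string
    Lua { chunk: String },
}

#[cfg(feature = "lua")]
#[derive(Debug)]
pub enum ResponseData {
    /// String returned by invoking the loaded function server-side
    Lua { result: String },
}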

mlua seems better maintained with more diverse options for configuration (selecting lua version, vendored, async, etc) than rlua.

Reason for this feature is that it would make supporting certain features of distant.nvim smoother like being able to perform local filesystem operations to search for a root pattern.

Support optional encryption at API level

For our TCP and unix sockets, this is still being done via a handshake automatically. In the case of our inmemory stream, we don't need to be doing encryption given that it's just acting as message passing within the same application.

The inmemory stream is currently just used for tests, but the plan is that we can stand up an SSH client to act as a pseudo-server, where a session interacts with the ssh client in the same way that it would interact with a distant server. A request is sent, the ssh client translates that into some action to perform remotely, and then sends back an appropriate response. This is all done in-process, but means we don't have to create a unique api just to support ssh.

Improve exit code reporting

Currently we report the exact same exit code when any error is encountered. Ideally, we should be able to provide a way to distinguish a couple of different errors, but the error being returned is a Box<dyn Error>, so that'll need to be refactored.

This is useful for the distant.nvim plugin so it can determine how to respond to a distant action failing. It also might be handy to have the shell mode for a singular process mirror the exit code of its process.

Channel lagging on client broadcast stream causes problems

When proxying the LSP server, it seems that the output over stdout can be really, really large. This causes a large volume of stdout messages to be sent and it seems like our client is unable to process them fast enough, so we eventually get lag as part of a recv error (see error).

Currently, this error causes our loop to exit and the process eventually dies. I kind of like this, as losing data is bad for stdout. The naive solutions are to raise the broadcast cap, increase the max size of each message that can be sent over the wire, and increase the buffer used to read stdout/stderr before sending over to our client. 1k may be too small for reading stdout/stderr. If it's a question of TCP frame size, which we aren't even worrying about right now, we could go with something like the maximum allowed over TCP if needed. See the size limit discussion here: https://stackoverflow.com/questions/2613734/maximum-packet-size-for-a-tcp-connection

From my old over-there project, I'd found maximum size recommendations:

mod tcp {
    /// Maximum Transmission Unit for Ethernet in bytes
    pub const MTU_ETHERNET_SIZE: usize = 1500;
    
    /// Maximum Transmission Unit for Dialup in bytes
    pub const MTU_DIALUP_SIZE: usize = 576;
}

mod udp {
    /// IPv4 :: 508 = 576 - 60 (IP header) - 8 (udp header)
    pub const MAX_IPV4_DATAGRAM_SIZE: usize = 508;
    
    /// IPv6 :: 1212 = 1280 - 60 (IP header) - 8 (udp header)
    pub const MAX_IPV6_DATAGRAM_SIZE: usize = 1212;
}

Split into distant-core and distant crates

This will reduce the dependencies brought in by core and could help resolve #25 by instead separating out the core logic from the cli.

This would also simplify testing: we would not require an e2e feature, as the cli integration tests would all be e2e, whereas the core integration tests would not need ssh support.

Interactive/batch command mode

Currently, you run one command at a time via distant send. Support for not closing the connection after one command would be ideal to avoid the cost of producing shared secrets for encryption, which can be expensive.

Fortunately, with the current design of the program, there is already an id associated with messages to distinguish themselves AND a process id included with all stdout/stderr. So keeping track of responses should be doable (via JSON).

Windows support without WSL?

Do you plan to ever support Windows without using WSL?

It would be wonderful to launch on my Windows instance in KVM and run distant.nvim from the Linux host, the exact opposite of WSL.

What is currently preventing a native Windows build?

Support remote process & lsp remote process spawn without consuming session

Similar to how SSH has channels that all persist within a single connection, a better design for distant would be to not consume a session entirely with a singular remote process. Instead, we need to have a router that can support forwarding responses to the appropriate channel. Stdout, stderr, and done responses have no origin id since they are not targeting a callback. At the moment, only proc messages with data tied to a proc's id need to be forwarded. To that end, I think it's fine for us to build a routing layer dedicated to remote processes.

First thought is to have the session support spawning mpsc channel instances dedicated to proc ids. When a message comes in, we inspect whether it is ProcStdout, ProcStderr, or ProcDone. In the case that it is, we look up within the session whether there is an appropriate proc channel to send to; otherwise it goes through the broadcast channel.

Because we also need to be able to send messages for fire(...) calls and we cannot directly clone the transport write half, we may need to convert a Session into a Router that spawns tokio tasks to send and receive using the Session's read and write halves. From here, the router would support creating a new process channel as well as a broadcast channel. We could also still support callbacks using a oneshot.
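
A sketch of the routing decision (types are illustrative stand-ins for the actual distant data types):

use std::collections::HashMap;
use tokio::sync::{broadcast, mpsc};

#[derive(Clone, Debug)]
enum Response {
    ProcStdout { id: usize, data: String },
    ProcStderr { id: usize, data: String },
    ProcDone { id: usize },
    Other,
}

struct Router {
    proc_channels: HashMap<usize, mpsc::UnboundedSender<Response>>,
    broadcast_tx: broadcast::Sender<Response>,
}

impl Router {
    fn route(&mut self, msg: Response) {
        // Only proc messages carry a proc id to route on
        let proc_id = match &msg {
            Response::ProcStdout { id, .. }
            | Response::ProcStderr { id, .. }
            | Response::ProcDone { id } => Some(*id),
            _ => None,
        };
        if let Some(id) = proc_id {
            if let Some(tx) = self.proc_channels.get(&id) {
                if let Err(err) = tx.send(msg) {
                    // Proc channel is gone; fall back to the broadcast channel
                    let _ = self.broadcast_tx.send(err.0);
                }
                return;
            }
        }
        // No dedicated proc channel: goes through the broadcast channel
        let _ = self.broadcast_tx.send(msg);
    }
}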

Support session being passed in via stdin

For distant action, this normally loads a session from a file (highly insecure). While we can move the auth key to another program that communicates over unix sockets (see #6), it still allows any program to speak to it and send data using auth. All it would do is protect the auth key from being shared, meaning invaders would need access to the computer itself.

The next level would be to have distant action take a flag to read the session data in via stdin before performing an action, which means that the data is never stored and is instead communicated by another program.

CLI proc-run is broken for json format

Tests are missing because assert_cmd won't work here: stdin is closed after writing the first json message, so the program doesn't wait for the remaining json messages. We need custom process execution instead that waits for responses.

The actual change to support json format is really simple. We add a conditional to the cli remote process link branch to only execute it if using shell mode. Otherwise, we default to the other types and assume interactive is being used.

Some tests are flaky on CI

I'm seeing two kinds of failures:

  1. Timing issues that I'm assuming are because I'm waiting X milliseconds for something to happen such as appending a file being flushed or a task shutting down. For these, we either need to increase the timeout to be much longer (using tokio::select to validate?) or remove the timeout and wait indefinitely for the change to occur.
  2. Process exit stderr being polluted by an error log about the broadcast channel being closed. Ideally, we can make sure that no unexpected logs appear, but this seems to happen when some terminations happen quickly (or is it slowly?). An alternative would be to run in quiet mode to avoid any log output. That might be a good idea for proc-run, as mixing in log information would be a bad decision. So, for processes, maybe we should make it such that the logger will ONLY log to a file and not log anything otherwise?
