
Comments (25)

 avatar commented on May 25, 2024 4

Did you push your proof-of-concept code anywhere where I could read through it?

Found the code and fixed it up to build and run (maybe)
https://github.com/breja/veonim_no_electron-v1
https://github.com/breja/veonim_no_electron-v2

btw, the files under runtime/(darwin|linux|win32)/vvim are binary apps that allow you to type vvim some-file.txt in the app and have it open that buffer in the current window - this is a way to open up files from the terminal without nesting neovim. The way it works is it takes the process args and makes an http call to https://github.com/smolck/uivonim/blob/master/src/services/remote.ts.

There are a few problems with this:

  1. The binary files are opaque in the source tree (suspicious?). They were written in Nim, but I lost the source. If you care to keep the functionality, it may be better to replace them with shell scripts for each target operating system.
  2. There is probably a better way to "listen" for events coming from the shell script (named pipes maybe?). HTTP server is a little too much and may be a security risk.
  3. Sorry for the lousy implementation 😞
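Suggestions 1 and 2 above could be combined: replace the opaque Nim binaries with a tiny script that writes its arguments to a Unix domain socket that the app listens on, instead of an HTTP call. A minimal sketch of that idea (the socket path, message shape, and function names here are all made up for illustration, not uivonim's actual API):

```typescript
import * as net from 'net'
import * as os from 'os'
import * as path from 'path'

// Hypothetical socket path; a real implementation would pick its own location.
const SOCK = path.join(os.tmpdir(), 'uivonim.sock')

// Pure helper: turn the wrapper's argv into an "open these files" request,
// resolved against the caller's working directory.
export const makeOpenRequest = (cwd: string, args: string[]) => ({
  kind: 'open' as const,
  files: args.map((a) => path.resolve(cwd, a)),
})

// Wrapper side: write the request to the Unix domain socket and exit.
// The app listens on the same socket instead of running an HTTP server.
export const sendOpenRequest = (files: string[]) =>
  new Promise<void>((resolve, reject) => {
    const conn = net.createConnection(SOCK, () => {
      conn.end(JSON.stringify(makeOpenRequest(process.cwd(), files)), () =>
        resolve()
      )
    })
    conn.on('error', reject)
  })
```

A socket file is also easier to permission-restrict than an HTTP port, which addresses the security concern in point 2.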

from uivonim.

 avatar commented on May 25, 2024 2

Compatibility. I don't want to have to support all sorts of browsers and browser versions. With electron, this is of course not an issue, since it's just a single browser under the hood. In practice, this might not be that big of an issue, but I'm not sure.

Yes this is ultimately why Slack switched from webview to Electron. https://news.ycombinator.com/item?id=18763449

Side note: As the Slack dev mentioned, the memory usage of webviews could be deceiving. It probably warrants more research. On macOS, if you open up a webview and look up the process in Activity Monitor, I believe you find 3 processes totaling about 30MB. But I suspect there could be other memory usage hidden under different system processes. 30MB seems too good to be true - even hello-world examples from native app frameworks have a hard time staying in that low a memory bracket.

Compatibility was a concern for me too. I decided from the beginning that I was not going to support old stuff. Chromium/Edge (Windows) + WebKit (MacOS/Linux) I think are reasonably modern-ish so I considered them supportable - especially with JS tooling that can smooth over some of the inconsistencies. For everyone else there would be a "compatibility-mode" Electron build that would have the same client/server code as the webview package.

Thus by default users would get the benefits of a smaller footprint with webview, and if their system did not support the webview version they would have to decide between the Electron version, upgrading their system, or using alternative software. I suppose you could invert this model and have an Electron build by default, and a webview version for those who want a lighter setup. Or maybe only Electron + web browser for remote functionality.

Ultimately it's up to you if a webview version is worth investing time in. Even though everyone likes to complain about Electron, it does have some benefits.

Personally where I ended up with the client/server/webview experiments is that it doesn't buy much except as a deterrent against people who complain about Electron. I think a bigger value of client/server might be in attach/detach modes and remote functionality, although I have reservations about remote too (neovim remote is latency sensitive and a bad experience outside of local containers and wired LAN connections).

Packaging. You mention this in the READMEs of your proof-of-concept repos, and it's definitely something to consider. ATM, packaging is super straightforward with electron, and if a significant amount of friction is added after removing it that's definitely a downside. Of course, I don't expect an electron-less solution to be as easy to setup/package as the current setup, but I'd like it to be as straightforward as possible.

Yeah, this is a bit tricky. For some reason I really thought it would be cool to have a single binary that would have everything in it. I explored bundling the Node runtime, the JS files, and the native webkit code all together somehow.

I tried various node libraries similar to:

I also tried adding the nodejs binary into the executable:

I even tried building node.js from source and statically linking it in the binary (absolute mad lad)

There are more sane webview packaging tools that I've seen show up on HackerNews/Reddit from time to time, but I haven't tried any:

There's also the fact that some people don't want to use a client/server setup, but just want to be able to open an editor real quick and close it after they're done. Like emacs vs. emacsclient. I'd like to have support for both workflows if possible (I presume this is already possible, just would need some work, maybe with wrapper scripts).

Indeed. It should be possible. The native app can start the server and then open up the webview. Pass in a flag and only the server is started. Something like that.

Do you know of any apps (specifically SPAs) that use webview and/or a client/server architecture like this? I'd be interested to see how others get around the above problems (and any others that may exist) in real-world apps.

Not any off the top of my head, sorry. Maybe look through some of the packaging frameworks and see if you can find apps using them. Most of the big name apps use Electron, probably (as you mentioned) because support is much easier.


 avatar commented on May 25, 2024 1

Interesting! I've been interested in webview as a potential electron alternative . . . I'm curious, did overall performance (not just memory usage, but also speed in general and such) improve after removing electron? I'm also guessing startup time was a lot better?

I don't recall dramatic performance improvements. At the end of the day it's still a web app in a web browser, so there are hard limits.

Having a client/server setup sounds really interesting, and if the electron dependency is removed then maybe Deno could be explored as a Node alternative (?) which might have increased performance (not certain Deno would improve perf, but I've heard that Deno is faster than Node; the ecosystem is nowhere near that of Node's though, so it might not be feasible to switch just yet). Did you push your proof-of-concept code anywhere where I could read through it? That would be a nice reference implementation from which to get an idea of how to go about implementing this.

I have not looked into the performance of Deno, but as I understand it Deno still runs on v8. There are other good reasons to use Deno, like a much nicer and richer standard library (bonus: builtin Typescript compiler). But either way, with the client/server model you could write the server in any language that has a WebSockets library (or WebRTC?).

I will have to search the archives for the proof-of-concept and see if I can get it working again. Will keep you updated.

I'm also curious, what was unsatisfactory about the WebRTC libraries you found?

I don't recall specifics, but I do recall just giving up because it was an insurmountable challenge. WebRTC itself is a lot more involved than just opening a socket to a server (ICE, TURN, STUN, etc.). It's possible there are better solutions today, especially if you are looking across different language implementations.


 avatar commented on May 25, 2024 1

Hey, so I haven't followed everything but I have a question. @Breja you said that you separated the UI thread from the msgpack threads, IIUC. I see how this makes sense when there is work that can be done async.

But in this particular case, given that neovim is what defines the content of the UI, shouldn't neovim's work be considered crucial time-critical synchronous work? Because basically the UI can't draw anything before it has received neovim's response, and knowing that I feel like separating threads actually introduces more latency. I'd be happy to hear your comments on that.

Apologies, I probably did a terrible job of explaining the threading situation.

But you are right of course - there is no other useful work to do when drawing a frame so async would be a negative. In Veonim all neovim UI events (redraw/input) are processed in the UI thread (to be pedantic, the JS main thread) synchronously directly via stdout/stdin (at least that's what I remember - but the code will keep me honest 😄).

The real issue was implementing the language server client and later vscode extension host integration. If a user would open up a massive buffer we would get a massive message containing the full buffer contents which could easily take much longer than a single frame to parse. It was in this case that we spawned a separate background thread, made a separate connection to nvim via socket, and processed non-UI events. This way the UI would still be interactive while the background thread was busy parsing buffer text to send to language servers.

If Veonim would not have been written in Javascript or Neovim could speak JSON this multithreading business might not have been an issue. 😛

This business about the background thread for an alternate connection to neovim was what I was trying to reconcile in previous messages. Uivonim doesn't need to implement a language server client/ext host client; it uses Neovim's builtin LSP client. Thus, that background thread needed for decoding large language-server related messages might not be needed, and removing it could simplify the architecture a bit.


romgrk avatar romgrk commented on May 25, 2024 1

I've done a bit of profiling. The case is one-keypress (j), so just the cursor moving down one line. The image below represents the profile taken, not including the keypress & neovim delay, but starting at the moment the first response from neovim is received. Here is my interpretation of it:

  • Most of the time is spent inside the onData handler, so IIUC, msgpack parsing isn't the issue. If you observe the thin pink slice at the start of the second event, that's a .superparse() stack. I imagine the other ones are so small they are not included.
  • There seem to be 4 events sent from neovim. The console.log I put in says it's: redraw, redraw, redraw+redraw, and uivonim-position+uivonim-autocmd.
  • Most of the time doesn't seem to be spent in msgpack decoding. If you look at the bottom of the stacks, api.refreshLayout() (teal) and updateCursorChar() (pink) seem to be quite expensive.
  • IIUC, there are 2 ways to make gains here:
    1. Don't refresh the layout 4 times for 4 successive events. Put in place a small (0-10ms) timeout that does a render update (including api.refreshLayout(), updateCursorChar(), etc) after the timeout runs out. Each received event resets the timeout to zero. The browser won't be busy recalculating styles & layout and will probably be able to allocate that time to the event processing script, which will run faster. Once the four events are accumulated, you can mutate/render everything in a loop.
    2. Don't trash your layout! In the last event illustrated below, you see a big (67ms) getBoundingClientRect(). It's also the biggest contributor to api.refreshLayout() in the other calls. As far as I can see, it's calling the method on a window element. Solution: cache the bounding client rect value when creating the window/changing window sizes, it will be a lot more performant.
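Point 1 above can be sketched as a tiny debounce (the `renderAll` callback is a hypothetical stand-in for `api.refreshLayout()`, `updateCursorChar()`, etc.):

```typescript
// Coalesce bursts of redraw events and run the expensive layout/render work
// once, shortly after the last event arrives.
export const makeBatcher = (renderAll: () => void, delayMs = 10) => {
  let timer: ReturnType<typeof setTimeout> | undefined
  return () => {
    // each incoming event pushes the flush back, so 4 events = 1 render
    if (timer) clearTimeout(timer)
    timer = setTimeout(() => {
      timer = undefined
      renderAll()
    }, delayMs)
  }
}
```

During the burst itself no DOM work happens at all, which is exactly what keeps the browser free to run the event-processing script.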

[Screenshot from 2020-10-31 21-58-39]


romgrk avatar romgrk commented on May 25, 2024 1

Well it is...but to check that it hasn't changed you query the .getBoundingClientRect() method ^^

api.refreshLayout = () => {
  const { top, left, width, height } = content.getBoundingClientRect()
  const x = left
  const y = top - titleSpecs.height
  const same =
    layout.x === x &&
    layout.y === y &&
    layout.width === width &&
    layout.height === height
  if (same) return
  // ...

You should instead update it when the window is created or when it changes size (either when neovim tells you or when the global window size changes). Right now it seems you're querying it on every event.

I've made a very quick patch to test that my assumptions are right and it seems to work. With this patch applied my latency for the same use case described above is cut in half. It's surprisingly fast. Still not terminal-level fast but much better. This is for single-keypress though; typing text in insert mode is still not ok. You'll see in the patch that I've throttled refreshWebGLGrid with the timeout optimization I described earlier, since that function was taking a lot of time. Not sure what it does but I'm guessing it's a rendering function that can run after the events.

The important idea here is: do not touch the DOM while processing events. Batch everything for about 5-10ms after you haven't received any event. Any modification to the DOM during event-processing is triggering the browser into style+paint cycles. So all this needs to be done after that timeout: webgl rendering, cursor rendering, messageClearPromptsMaybeHack, etc.

Edit: I've just seen your new message above: haven't tested it but it seems to be the right thinking.


romgrk avatar romgrk commented on May 25, 2024 1

Great! Do you want to open a PR for that (and any other changes we make)? We can refine things if necessary there.

No, the patch is a proof of concept, I really don't know the codebase and you'd need to implement that properly. Also I'm involved in too much stuff for my own good right now, I have too many PRs opened already ^^


romgrk avatar romgrk commented on May 25, 2024 1

One other thing: in every profile I've done, I see the text content be updated slightly before the cursor position is. The fact that the cursor is rendered in a div above the text is bad IMHO. The browser needs to composite both layers together, and this is expensive work. You would probably be better off by rendering the cursor yourself on the canvas.


 avatar commented on May 25, 2024

As you've probably seen there are two paths to talk to nvim:

  • stdin/stdout:
    • low level API: receive redraw events from nvim and send input events to nvim
    • runs in UI thread, needs optimizations to be performant [0]
  • socket connection via nvim listen address:
    • high level API: e.g. getting buffer list, text buffer events, etc.
    • runs in a separate web worker thread; it does not impact the UI render thread, so it does not need to be as performant [1]

I had performance issues when I used off-the-shelf msgpack libraries - msgpack is too slow in Javascript. So I had to implement a custom msgpack decoder that would skip string allocations for grid_line events [0]. Once I had a msgpack implementation I figured it wasn't too much to add the RPC layer on top. I was also going through this phase where I wanted to have as few node_modules as possible.

So to me it seems possible to replace the custom msgpack-rpc/neovim api implementation that is used for the higher level API calls. As for the lower level API that handles redraw events I see a couple options:

  • keep using the custom msgpack decoder for the lower level redraw events, but also bring in neovim node-client for the higher level API calls
  • try to use only node-client for both low level redraw events and high level API calls. If you benchmark the render code and the performance seems acceptable then the custom msgpack implementation might not be needed.

[0] I think the problem was that since each grid_line character is a separate string, it takes way too much time to allocate all those strings, plus the garbage collection to clean them all up. The idea was to keep the strings as raw bytes, and use the ascii/unicode byte numbers as an index to a font glyph atlas in WebGL.

// this is probably the most clever line in this module. deserializing
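The idea in footnote [0] - skipping string allocation entirely and mapping raw bytes straight to glyph-atlas slots - could be sketched like this (the offset and atlas layout are assumptions for illustration, not Veonim's actual scheme):

```typescript
// Map raw grid_line bytes to glyph-atlas indices without ever allocating
// a JS string for the cell contents.
export const bytesToGlyphIndices = (
  buf: Uint8Array,
  firstGlyph = 0x20 // assume the atlas starts at ' ' (space)
): number[] =>
  Array.from(buf, (b) => b - firstGlyph)
```

The resulting indices can be handed to WebGL directly, so the garbage collector never sees per-character strings.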

[1] There is a bit of indirection with the multiple instances feature (whose implementation I kinda regret now). Each instance runs in its own worker thread to keep necessary state localized with each nvim instance. The nvim API calls also originate from within this worker thread. Because msgpack is slow in Javascript, I thought having the neovim API calls in a separate thread would be useful for parsing large msgpack payloads in a way that does not impact the UI thread.

For example the buffers component:

[UI thread]
buffers.ts (get list of nvim buffers) -> instance-api.ts -> instance-manager.ts -> messaging/worker.ts

-- then

[web worker thread - worker/instance.ts (spawned by instance-manager.ts)]
messaging/worker-client.ts -> on request from UI thread: run handler -> call method in neovim/api.ts -> encode msgpack, send over socket to nvim

-- then
[nvim process]
receive msgpack request over socket -> do stuff

So at the very least you could replace neovim/api.ts with a different neovim API client. However you could maybe skip the web worker and keep a list of neovim client references on the main UI thread. If there are no expensive msgpack messages that need to be decoded then the UI thread should be good enough.

If I were rewriting Veonim today I would not do 1 UI thread -> N instance threads. I would probably do multiple windows so then it's 1 UI thread -> 1 instance thread. Actually if nothing is computationally expensive, the instance thread wouldn't even be needed. Just let the event loop in the UI thread schedule interactions with nvim - much simpler.


smolck avatar smolck commented on May 25, 2024

@Breja Wow, thanks for the detailed comment!

One thing I'm wondering about is if webassembly could fix the performance issues with msgpack parsing? Not sure if conversions/allocations would need to take place to go between wasm and JS that would cause the same problem with string allocations, but theoretically wasm could speed things up quite a bit, I would think.

If I were rewriting Veonim today I would not do 1 UI thread -> N instance threads. I would probably do multiple windows so then it's 1 Ui thread -> 1 instance thread. Actually if nothing is computationally expensive, the instance thread wouldn't even be needed. Just let the event loop in the UI thread schedule interactions with nvim - much simpler.

Just to confirm, when you say multiple windows what do you mean? Multiple BrowserWindows?

Also, I'm curious what you had in mind for the multiple instances feature/why you created it? If it enables some really cool and unique features I'm all for keeping it and making it better, but I'm not sure what you had in mind there?


 avatar commented on May 25, 2024

One thing I'm wondering about is if webassembly could fix the performance issues with msgpack parsing? Not sure if conversions/allocations would need to take place to go between wasm and JS that would cause the same problem with string allocations, but theoretically wasm could speed things up quite a bit, I would think.

Yes, WASM is fast, but moving/(de)serializing the data back and forth is the slow part. Maybe if you work with WebGL directly from WASM (well kinda) - the WebGL calls would still be through Javascript, but you could msgpack decode things and then put the data in a SharedArrayBuffer that is shared between WASM and the WebGL context. It might be worth trying out, at least for fun and learning. Although if you write too much WASM, you might as well write a native app. 😄
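The SharedArrayBuffer handoff mentioned above is essentially this (grid dimensions and the u32-per-cell layout are assumptions for illustration):

```typescript
// The decoder (WASM or a worker) writes cell data into shared memory;
// the render side reads the same memory with no copy or deserialization.
const COLS = 80
const ROWS = 24
const shared = new SharedArrayBuffer(COLS * ROWS * Uint32Array.BYTES_PER_ELEMENT)

// decoder side: write a glyph code for cell (0, 0)
new Uint32Array(shared)[0] = 0x41

// render side (possibly another thread holding the same SharedArrayBuffer):
const cells = new Uint32Array(shared)
const glyph = cells[0]
```

Both `Uint32Array` views alias the same underlying memory, which is what makes the handoff free.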

Just to confirm, when you say multiple windows what do you mean? Multiple BrowserWindows?

Yes

Also, I'm curious what you had in mind for the multiple instances feature/why you created it? If it enables some really cool and unique features I'm all for keeping it and making it better, but I'm not sure what you had in mind there?

Before I used Veonim I used tmux/nvim. The goal with Veonim was to replicate that workflow and not need a terminal outside of Veonim. I wanted to do everything in Veonim, much like Emacs 😄

So I copied the tmux sessions approach - where each logical project is in a completely separate environment. In Veonim this means separate neovim processes that can be switched between from the same UI window. I called this "instances" to avoid confusion with the existing vim sessions functionality.

A typical web development workflow might look like:

  1. start Veonim (an initial instance is created by default)
  2. cd to web-frontend-UI
  3. open buffers/tabpages/terminals/etc
  4. create new instance
  5. cd to web-backend-server
  6. open buffers/terminals/etc
  7. switch back and forth between the different instances with a fuzzy menu
  8. ???
  9. profit!

VSCode does something similar where you can open multiple app windows for different projects and use a fuzzy menu to switch between them. The difference is that in VSCode it switches between different app windows, whereas in Veonim it's one app window that can switch between different nvim processes.

In retrospect, having different app windows is maybe superior, both from a technical-complexity standpoint and because it lets the window manager switch between different projects in a way that is more natural to the user (well, if you have a good window manager anyway).


smolck avatar smolck commented on May 25, 2024

Okay, that makes sense, thank you! I really like the idea of having a neovim tmux; the only real thing that I think needs to be added to make that a reality is some way of detaching/attaching to these sets of instances, like in tmux. Did you have any plans on adding something like that?

Overall, thank you so much for all of the detailed info! I think I have a bit of a better idea of how to go about refactoring things now 😃


 avatar commented on May 25, 2024

...the only real thing that I think needs to be added to make that a reality is some way of detaching/attaching to these sets of instances, like in tmux. Did you have any plans on adding something like that?

Are you referring to nvim_ui_attach/nvim_ui_detach? Or something else?

It's possible to run multiple neovim instances in parallel in one app window. Using vim-create.ts (which calls createVim in instance-manager.ts) will spawn a new neovim process and nvim_ui_attach to it in the current UI. Then using vim-switch.ts (which calls switchVim in instance-manager.ts) will bring up a menu which can be used to switch between the different active neovim processes. For the other instances in the background the neovim processes keep running.

Oh, and there is also vim-rename to rename the instance.
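The create/switch/rename model described above boils down to something like the following (a minimal sketch, not uivonim's actual code, which spawns real nvim processes and calls nvim_ui_attach):

```typescript
// Each instance is a separate nvim process; one is active at a time.
type Instance = { id: number; name: string }

export const makeInstanceManager = () => {
  const instances = new Map<number, Instance>()
  let current = -1
  let nextId = 0
  return {
    create: (name: string) => {
      const id = nextId++
      instances.set(id, { id, name })
      current = id // a newly created instance becomes the active one
      return id
    },
    switchTo: (id: number) => {
      if (instances.has(id)) current = id
    },
    rename: (id: number, name: string) => {
      const inst = instances.get(id)
      if (inst) inst.name = name
    },
    currentName: () => instances.get(current)?.name,
  }
}
```

Background instances simply stay in the map (their nvim processes keep running); switching only changes which one the UI is attached to.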


smolck avatar smolck commented on May 25, 2024

Are you referring to nvim_ui_attach/nvim_ui_detach? Or something else?

No, I mean some way of quitting Veonim but leaving those instances running somehow, so that later you can spin up Veonim again and connect back to those instances, with everything the same as it was when you quit, like with tmux attach. Does that make sense?


 avatar commented on May 25, 2024

No, I mean some way of quitting Veonim but leaving those instances running somehow, so that later you can spin up Veonim again and connect back to those instances, with everything the same as it was when you quit, like with tmux attach. Does that make sense?

Ah yes, I follow you now. Sorry for the misunderstanding.

Nothing of the sort exists in Veonim master (well besides some WebGL compatibility code). However in relation to remote features, I began some experimental code to decouple the app in a client/server model. The server would run in Node, with the Node runtime bundled in the binary somehow. The client would work in any modern web browser (with the default being a system webview) and communicate with the server via WebSockets. With this kind of setup it would be possible to close the UI and keep the server running - effectively an attach/detach workflow. It would even be possible to connect multiple UIs to the server, or to have one UI access multiple servers (e.g. local + container, etc.). As a bonus it would remove the need for Electron, reducing memory usage (total 20-30MB on MacOS) and maybe binary size a little.

I got a proof-of-concept working. My main concern was that the WebSocket messaging would introduce more delay in the input -> render cycle. Things were mostly okay with WebSockets, but there seemed to be some variance in the delay which I didn't feel great about. Using the built-in webview eval methods was slower and would not work in regular web browsers. I thought about using WebRTC Data Channels (maybe lower level than WS) but was not able to find a satisfactory WebRTC server library.


smolck avatar smolck commented on May 25, 2024

Interesting! I've been interested in webview as a potential electron alternative . . . I'm curious, did overall performance (not just memory usage, but also speed in general and such) improve after removing electron? I'm also guessing startup time was a lot better?

Having a client/server setup sounds really interesting, and if the electron dependency is removed then maybe Deno could be explored as a Node alternative (?) which might have increased performance (not certain Deno would improve perf, but I've heard that Deno is faster than Node; the ecosystem is nowhere near that of Node's though, so it might not be feasible to switch just yet). Did you push your proof-of-concept code anywhere where I could read through it? That would be a nice reference implementation from which to get an idea of how to go about implementing this.

I'm also curious, what was unsatisfactory about the WebRTC libraries you found?


smolck avatar smolck commented on May 25, 2024

Found the code and fixed it up to build and run (maybe)
https://github.com/breja/veonim_no_electron-v1
https://github.com/breja/veonim_no_electron-v2

Awesome, thank you so much! I really like the idea, especially being able to have the choice between a webview or just a quick browser tab as the client. Conceptually it also seems like it shouldn't be too much work to implement the server and move what needs to be moved to the frontend for rendering, as your proof-of-concepts show. I'm also curious how this would work on a mobile device/tablet with an external keyboard (not to be explicitly supported by this project necessarily, just something I was thinking about).

btw, the files under runtime/(darwin|linux|win32)/vvim are binary apps that allow you to type vvim some-file.txt in the app and have it open that buffer in the current window - this is a way to open up files from the terminal without nesting neovim. The way it works is it takes the process args and makes an http call to https://github.com/smolck/uivonim/blob/master/src/services/remote.ts.

There are a few problems with this:

  1. The binary files are opaque in the source tree (suspicious?). They were written in Nim, but I lost the source. If you care to keep the functionality, it may be better to replace them with shell scripts for each target operating system.
  2. There is probably a better way to "listen" for events coming from the shell script (named pipes maybe?). HTTP server is a little too much and may be a security risk.
  3. Sorry for the lousy implementation 😞

Interesting, thanks for the info, I didn't know that! Considering that sort of functionality can be achieved with neovim-remote, it can probably be removed once it's possible to pass CLI args through the executable to neovim; if someone wants that sort of functionality, they can then just use that plugin.


smolck avatar smolck commented on May 25, 2024

There are a couple of problems I'm thinking about, though, with a client/server architecture like this:

  • Compatibility. I don't want to have to support all sorts of browsers and browser versions. With electron, this is of course not an issue, since it's just a single browser under the hood. In practice, this might not be that big of an issue, but I'm not sure.
  • Packaging. You mention this in the READMEs of your proof-of-concept repos, and it's definitely something to consider. ATM, packaging is super straightforward with electron, and if a significant amount of friction is added after removing it that's definitely a downside. Of course, I don't expect an electron-less solution to be as easy to setup/package as the current setup, but I'd like it to be as straightforward as possible.

There's also the fact that some people don't want to use a client/server setup, but just want to be able to open an editor real quick and close it after they're done. Like emacs vs. emacsclient. I'd like to have support for both workflows if possible (I presume this is already possible, just would need some work, maybe with wrapper scripts).

Do you know of any apps (specifically SPAs) that use webview and/or a client/server architecture like this? I'd be interested to see how others get around the above problems (and any others that may exist) in real-world apps.


romgrk avatar romgrk commented on May 25, 2024

Hey, so I haven't followed everything but I have a question. @Breja you said that you separated the UI thread from the msgpack threads, IIUC. I see how this makes sense when there is work that can be done async.

But in this particular case, given that neovim is what defines the content of the UI, shouldn't neovim's work be considered crucial time-critical synchronous work? Because basically the UI can't draw anything before it has received neovim's response, and knowing that I feel like separating threads actually introduces more latency. I'd be happy to hear your comments on that.


romgrk avatar romgrk commented on May 25, 2024

Another question (sorry, I don't know the code well and this might have already been said): do you have approximate latency times for each part of the code in the whole keypress->msgpack->neovim->msgpack->ui_update cycle? Or at least the whole cycle without ui_update?

One more: if you're so sure that msgpack is so slow, why was the code kept in javascript as opposed to separating it in a native C/C++/rust module?

I think uivonim would have much better chances against other "competitors" with lower latency, because other than that IMHO the design is nice and having web tech makes it way easier to develop fancy widgets.


smolck avatar smolck commented on May 25, 2024

Another question (sorry, I don't know the code well and this might have already been said): do you have approximate latency times for each part of the code in the whole keypress->msgpack->neovim->msgpack->ui_update cycle? Or at least the whole cycle without ui_update?

@romgrk breja might have more info, but here's the call stack from a quick profiling I did, the keydown event being from within insert mode:

[callstack screenshot]

Here's the summary from the same part:

[summary screenshot]

Not sure if any of this helps, but just thought I'd throw it in here.


 avatar commented on May 25, 2024

One more: if you're so sure that msgpack is so slow, why was the code kept in javascript as opposed to separating it in a native C/C++/rust module?

It's not really msgpack itself that is slow, it's any decoding that is allocating strings in Javascript land. JSON.parse() is faster than any decoding written in Javascript because not only is it written natively, but it can also manage the lifetimes of the allocated strings for Javascript. But even that is not the fastest. The fastest is to not do any decoding at all, which is what Veonim does (sorta). It decodes some of the redraw event metadata, but allocating any strings for grid_line is skipped. We take the raw character bytes and send them to WebGL.

Why not write it in a native module? Because at some point you still have to decode stuff back into Javascript strings, and that's not free. Even VSCode tried to use C++ for the text buffer implementation, but found that the cost of moving strings back and forth between C++ and Javascript ate up too much of the savings to be worthwhile. https://code.visualstudio.com/blogs/2018/03/23/text-buffer-reimplementation

In Veonim, as I recall, render calls took about 1ms of JavaScript execution time. The rest of the time is spent waiting on neovim, OS pipes (stdout/stdin), WebGL, browser compositing, window compositing, etc. I'm not sure it's possible to go much faster than that without escaping the browser.


smolck commented on May 25, 2024

@romgrk Wow, thank you for the detailed information! I have one question though:

Don't trash your layout! In the last event illustrated below, you see a big (67ms) getBoundingClientRect(). It's also the biggest contributor to api.refreshLayout() in the other calls. As far as I can see, it's calling the method on a window element. Solution: cache the bounding client rect value when creating the window/changing window sizes, it will be a lot more performant.

Isn't the layout already cached?

const layout = { x: 0, y: 0, width: 0, height: 0 }


smolck commented on May 25, 2024

Don't refresh the layout 4 times for 4 successive events. Put in place a small (0-10ms) timeout that does a render update (including api.refreshLayout(), updateCursorChar(), etc) after the timeout runs out. Each received event resets the timeout to zero. The browser won't be busy recalculating styles & layout and will probably be able to allocate that time to the event processing script, which will run faster. Once the four events are accumulated, you can mutate/render everything in a loop.

@romgrk Is this the sort of thing you were thinking?

diff
diff --git a/src/render/redraw.ts b/src/render/redraw.ts
index 58fcc3b9..68ac0f93 100644
--- a/src/render/redraw.ts
+++ b/src/render/redraw.ts
@@ -307,6 +307,20 @@ const win_float_pos = (e: any) => {
   }
 }
 
+let layoutTimeout: ReturnType<typeof setTimeout> | undefined
+
+const refreshOrStartLayoutTimer = (winUpdates: boolean) => {
+  layoutTimeout = setTimeout(() => {
+    state_cursorVisible ? showCursor() : hideCursor()
+    if (state_cursorVisible) updateCursorChar()
+    dispatch.pub('redraw')
+    if (!winUpdates) return
+
+    windows.disposeInvalidWindows()
+    windows.layout()
+  }, 10)
+}
+
 onRedraw((redrawEvents) => {
   // because of circular logic/infinite loop. cmdline_show updates UI, UI makes
   // a change in the cmdline, nvim sends redraw again. we cut that stuff out
@@ -317,6 +331,7 @@ onRedraw((redrawEvents) => {
   const messageEvents: any = []
 
   const eventCount = redrawEvents.length
+
   for (let ix = 0; ix < eventCount; ix++) {
     const ev = redrawEvents[ix]
     const e = ev[0]
@@ -373,12 +388,6 @@
 
   renderEvents.messageClearPromptsMaybeHack(state_cursorVisible)
 
-  requestAnimationFrame(() => {
-    state_cursorVisible ? showCursor() : hideCursor()
-    if (state_cursorVisible) updateCursorChar()
-    dispatch.pub('redraw')
-    if (!winUpdates) return
-    windows.disposeInvalidWindows()
-    windows.layout()
-  })
+  if (layoutTimeout) clearTimeout(layoutTimeout)
+  refreshOrStartLayoutTimer(winUpdates)
 })


smolck commented on May 25, 2024

I've made a very quick patch to test that my assumptions are right and it seems to work. With this patch applied my latency for the same use case described above is cut in half. It's surprisingly fast. Still not terminal-level fast but much better.

Great! Do you want to open a PR for that (and any other changes we make)? We can refine things if necessary there.

The important idea here is: do not touch the DOM while processing events. Batch everything for about 5-10ms after you haven't received any event. Any modification to the DOM during event-processing is triggering the browser into style+paint cycles. So all this needs to be done after that timeout: webgl rendering, cursor rendering, messageClearPromptsMaybeHack, etc.

Okay, thank you for the info!
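To make the batching idea concrete, here is a minimal sketch (hypothetical names, not the actual uivonim render path): queue all DOM/render work during event processing, then apply it in a single pass once no event has arrived for ~10ms:

```typescript
// Hypothetical sketch of quiet-period batching: DOM/WebGL work is queued
// while events are being processed and applied in one pass after ~10ms
// with no new events.

const BATCH_MS = 10
let queue: Array<() => void> = []
let timer: ReturnType<typeof setTimeout> | undefined

const flush = () => {
  if (timer) clearTimeout(timer)
  timer = undefined
  const work = queue
  queue = []
  // the only place DOM mutation / webgl rendering / cursor updates happen
  work.forEach((fn) => fn())
}

const schedule = (fn: () => void) => {
  queue.push(fn)
  if (timer) clearTimeout(timer) // every new event resets the quiet timer
  timer = setTimeout(flush, BATCH_MS)
}
```

Each redraw batch would call `schedule` instead of touching the DOM directly, so the browser recalculates style and layout at most once per quiet period instead of once per event.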

