revery-ui / revery

:zap: Native, high-performance, cross-platform desktop apps - built with Reason!

Home Page: https://www.outrunlabs.com/revery/

License: MIT License

OCaml 3.62% C++ 5.82% JavaScript 1.42% Shell 0.14% C 34.01% Reason 54.68% Objective-C 0.15% Dockerfile 0.16%
reason reasonml ocaml react electron ui desktop app native cross-platform

revery's Introduction

Logo

Build native, high-performance, cross-platform desktop apps with Reason!



Slider components

🚧 NOTE: Revery is a work-in-progress and in active development! 🚧

To get a taste of Revery, check out our JavaScript + WebGL build on the playground. For the best experience, though, you'll want to try a native build.

Motivation

Today, Electron is one of the most popular tools for building desktop apps, using an HTML, JS, and CSS stack. However, it has a heavy footprint in terms of both RAM and CPU - it essentially packs an entire browser into the app. Even with that tradeoff, it has a lot of great aspects: it's the quickest way to build a cross-platform app, and it provides a great development experience - as evidenced by its use in popular apps like VSCode, Discord, and Slack.

Revery is like a super-fast, native-code Electron - with bundled React-like and Redux-like libraries and a fast build system, all ready to go!

Revery is built with ReasonML, a JavaScript-like syntax on top of OCaml. This means the language is accessible to JS developers.

Your apps are compiled to native code with the Reason / OCaml toolchain - with instant startup and performance comparable to native C code. Revery features GPU-accelerated rendering. The compiler itself is fast, too!

Revery is an experiment - can we provide a great developer experience and help teams be productive, without making sacrifices on performance?

Design Decisions

  • Consistent cross-platform behavior

A major value proposition of Electron is that you can build for all platforms at once, with great confidence that your app will look and work the same across different platforms. Revery is the same - aside from platform-specific behavior, if your app looks or behaves differently on another platform, that's a bug! As a consequence, Revery is like Flutter in that it does not use native widgets. This means more work for us, but also more predictable functionality cross-platform!

NOTE: If you're looking for something that does leverage native widgets, check out briskml. Another alternative is the cuite OCaml binding for Qt.

  • High performance

Performance should be at the forefront, and not a compromise - we need to develop and build benchmarks that help ensure top-notch performance and start-up time.

  • Type-safe, functional code

We might have some dirty mutable objects for performance - but our high-level API should be purely functional. You should be able to follow the React model of modelling your UI as a pure function of application state -> UI.

Getting Started

Contributing

We'd love your help, and welcome PRs and contributions.

Some ideas for getting started:

License

Revery is provided under the MIT License.

Revery bundles several dependencies under their own license terms - please refer to ThirdPartyLicenses.txt.

Contributors

Thanks to everyone who has contributed to Revery!

Backers

Thank you to all our backers! 🙏 [Become a backer]

Built with Revery

Onivim 2

Special Thanks

revery would not be possible without a bunch of cool tech:

revery was inspired by some awesome projects:

Hot reload

We don't have hot reload yet, but it is on our roadmap. In the meantime, you can check out the feat/hot-reload branch to see its progress.

@mbernat has also written a script that relaunches the app when the binary changes.

revery's People

Contributors

akinsho, bryphe, crossr, czystyl, despairblue, eduardorfs, ericluap, et7f3, faisalil, glennsl, jasoons, jchavarri, joprice, kitten, lessp, mozmorris, msvbg, nik72619c, nikgraf, ohadrau, parkerziegler, paulshen, romgrk, samatar26, tatchi, tcoopman, ulrikstrid, whoatedacake, xixixao, zbaylin


revery's Issues

UI Infrastructure: Hit testing (pre-req for mouse input)

Once we have mouse support, we need a way to detect when the cursor is over an element, for the purposes of implementing the following events:

  • onMouseOver
  • onMouseOut
  • clicks

This is critical for the button in #37

Specifically, we should add a hitTest method to our Node hierarchy: https://github.com/bryphe/revery/blob/master/src/UI/Node.re

This would take a point (x, y) and return true if the point is inside the node and false otherwise.

An alternative implementation would be to have each node expose a bounding rectangle or bounding geometry that is then compared - but this is less flexible. With each node implementing a hitTest method, there can be arbitrary geometry support, and transforms are easily supported (e.g., for a box with a rotation applied, we can apply the inverse transform to the hit-test point and then do a simple point-in-box check).
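
A minimal sketch of what such a hitTest could look like against a node's measured bounds - the bounds record here is an illustrative stand-in, not Revery's actual node API:

/* Illustrative sketch: hit-test a point against a node's measured bounds.
   The `bounds` record is a stand-in for whatever layout data the node holds. */
type bounds = {
  x: int,
  y: int,
  width: int,
  height: int,
};

let hitTest = (bounds, px, py) =>
  px >= bounds.x
  && px < bounds.x + bounds.width
  && py >= bounds.y
  && py < bounds.y + bounds.height;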

Font - Default Font Family

We force the user at the moment to pick a font via fontFamily. If a fontFamily is not specified, you get a crash with this exception:

- Loading font:
Fatal error: exception Fontkit.FontKitLoadFaceException("[ERROR]: Unable to load font at FT_New_Face\n")

This is pretty rough - we should pick out a default font family if none is specified. I think that the Roboto-Regular.ttf that we already bundle, and use in several examples, would be an OK default.

Does anyone have any thoughts / preferences on this?

Bug: Border does not render if background is transparent

Issue: If a border is specified, but no background, the border does not render.

For example, for this component:

module Test = (
  val component((render, ~children, ()) =>
        render(
          () => {
            let borderStyle =
              Style.make(
                ~border=Style.Border.make(~width=10, ~color=Colors.white, ()),
                (),
              );

            let innerStyle =
              Style.make(
                ~backgroundColor=Colors.red,
                ~width=100,
                ~height=100,
                (),
              );

            <view style=borderStyle>
              <view style=innerStyle />
            </view>;
          },
          ~children,
        )
      )
);

I would expect to see this:
image

But get:
image

I actually hit this in #138, which is why I added a pretty transparent background here:
https://github.com/bryphe/revery/blob/d05c343e67c90b81fdc19da74fbf59b5e6a87536/examples/Bin.re#L113

The issue seems to be this check: https://github.com/bryphe/revery/blob/d05c343e67c90b81fdc19da74fbf59b5e6a87536/src/UI/ViewNode.re#L241

UI Styles: Implement 'border' styling

To have a useful UI - we should have parity with what the browser supports for the border styles: https://developer.mozilla.org/en-US/docs/Web/CSS/border

We should decide on our API surface. Perhaps something like this?

  • 2-pixel red border around entire element: <view style={Style.make(~border=Border(Colors.red, 2), ())} />
  • 1-pixel blue border on the left and right: <view style={Style.make(~borderHorizontal=Border(Colors.blue, 1), ())} />

For the initial implementation, only implementing the solid style seems reasonable. We can always add other styles down the road - supporting border-width and border-color is important, though!

There are a few things we'll have to do to support this:

Part 1: Style properties

Part 2: Rendering quads

For the core border rendering, we need to render a quad that spans the width (in the case of border-top/border-bottom) or the height (in the case of border-left or border-right), and positioned correctly in relationship to the node's layout.

This logic can live in ViewNode, and it's quite similar to the logic we have today for drawing the background - the only real difference is the color and dimensions of the quad.

https://github.com/bryphe/revery/blob/711a90a0b19af2f39dd984cb1570a1231e745365/src/UI/ViewNode.re#L22

Part 3: Rendering Junctions

The trickiest piece of the rendering work is handling where borders meet. For example, if I have a top border and a left border, we potentially need to render triangles where they meet. This is dependent on us having a triangle primitive in #120

For example, if I had a div like this:

<div style="border-left: 10px solid red; border-top: 20px solid yellow; width: 100px; height:100px;">

It would look something like this:
image

Once we get Part 3 completed, though, we'll have a pretty usable border story 💯

Example: Calculator

A calculator would be a great example for using the Button in #37, and perhaps some simple custom components.

Catch multiple strings in text at compile time

It seems I can't have multiple strings in a single <text> component like this:

<text style=textInputStyle> {state.comment} {"|"} </text>

This gives me the following runtime error:
Fatal error: exception Fontkit.FontKitLoadFaceException("[ERROR]: Unable to load font at FT_New_Face\n")

Would it be possible to catch this at compile time with types?

Examples: Publish our WebGL examples

Now that we have text rendering in WebGL (thanks @jchavarri!), I'm thinking it makes sense to publish our WebGL examples to a website - either GH pages or using netlify or something. It'd be a really easy way for people to try things out without needing to set anything up first.

API - Colors: Implement full set of colors available from HTML

We have a very small subset of Colors defined in our Colors module - like Colors.red, Colors.blue, etc.

It'd be great to have parity with the default set in HTML - I think there is a comprehensive list here:
https://www.w3schools.com/colors/colors_names.asp

Might be easier with #102 tackled - but it would be awesome to have this full set of Colors in revery, too!

Good place to start looking:
https://github.com/bryphe/revery/blob/master/src/Core/Colors.re

Text Rendering: Artifacts when rendering large textures

I was experimenting with high-dpi rendering / trying to simulate locally, and I saw artifacts pop up when rendering large sized textures:

image

The bug here is that we are using GL_REPEAT for GL_TEXTURE_WRAP_S/GL_TEXTURE_WRAP_T. We need to expose GL_CLAMP_TO_EDGE and use that instead.

state of js compilation

The only notes I see concerning web support are an unchecked box for supporting it as a platform, so I'm assuming this is a known issue, but I'm curious what the state of it is. In the build, I see the script 'build:js'. I ran it, then started an http server in _build/default/examples and opened index.html. I got the following error:

Uncaught TypeError: runtime.caml_glfwDefaultWindowHints is not a function

I checked _build/default/examples/Bin.bc.js, and it seems to be defined.

I'm personally interested in using js_of_ocaml for full stack dev, so very curious about the root cause here and steps to debug and solve.

Documentation: Add documentation for creating a custom component

We currently support custom components (with hooks even!) - some examples in the code:
https://github.com/bryphe/revery/blob/711a90a0b19af2f39dd984cb1570a1231e745365/examples/Bin.re#L50
https://github.com/bryphe/revery/blob/711a90a0b19af2f39dd984cb1570a1231e745365/examples/Bin.re#L6

However, our documentation just says TODO. We should put a simple example there and detail the 'anatomy' of a custom component, so that it is more accessible.

Automated Testing: Image Snapshot Tests

Thinking about #145 - for some of these very visual cases, we have no current test coverage. It's important to be able to make changes safely and confidently - so I always think when there is a regression - how can we improve our 'safety net' to catch these?

We're getting to a level of features with background color, text rendering, borders, shadows that it becomes tough to validate all of these in a PR change!

What I'd like to add to our infrastructure is a set of image-based verification tests, that can validate some of these basic scenarios. These would render a simple scene or component, save it as an image, and then compare that image to a snapshot.

This isn't a new idea; tools like Telerik have supported it for a while, and some googling turns up a RosettaCode problem for an algorithm for this 😄

The challenge with such tests is making sure they are reliable and easy-to-update. For reliability, it often helps to have a threshold (% of pixels with the same value), or use per-platform snapshots (there might be differences in anti-aliasing, for example). These test suites should be pretty limited and focused on the core set of rendering primitives we have, because they have a maintenance cost. But they can help protect us against regressions.
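
As a rough sketch of the threshold idea (assuming both images are flattened RGBA buffers of equal size - the helper and its name are illustrative, not an existing API):

/* Sketch: threshold-based comparison of two RGBA buffers.
   Returns true when the fraction of differing bytes is within `threshold`. */
let imagesRoughlyEqual = (~threshold=0.01, a: Bytes.t, b: Bytes.t) => {
  let len = Bytes.length(a);
  if (len != Bytes.length(b)) {
    false;
  } else {
    let differing = ref(0);
    for (i in 0 to len - 1) {
      if (Bytes.get(a, i) != Bytes.get(b, i)) {
        incr(differing);
      };
    };
    float_of_int(differing^) /. float_of_int(len) <= threshold;
  };
};

A real harness would also want to emit a diff image (the red-overlay idea below) so failures are easy to act on.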

Open questions:

  • Is there a library we could use for image-verification tests? Something that could do a bitmap comparison and ideally show some sort of diff (ie, an overlay that shows different pixels in red).
  • Which scenarios should we cover?
  • When an image-verification test fails, how do we make it easy to act on and know what failed?
  • What additional APIs do we need to take a screenshot of the GLFW window, to compare against the snapshot? It looks like the glReadPixels API mentioned in this StackOverflow Post could help.

Alternatives:
One alternative to image-based snapshot testing is OpenGL API snapshot testing - essentially, putting a proxy in place for all the glXXX calls that records the inputs. This can ensure we end up with the same set of GL calls. This is more robust than the image-verification approach, but it also has a much higher maintenance cost - any internal refactoring or performance improvement that would've passed the image-verification test would also flag as a failure for these tests. So I'd lean towards the image-verification tests, for now.

API: Cursor style property

Once https://github.com/bryphe/reason-glfw/issues/66 is implemented, we'll have the ability to change the cursor. This is important to give users the UX they expect - a way to show when an element is clickable when you hover over it, an I-beam to show the user that text input is available, etc.

When we have those APIs, we'll have to decide on a way to expose them in revery.

I think the most intuitive approach, for users coming from React in the browser, would be to have a cursor style property: https://developer.mozilla.org/en-US/docs/Web/CSS/cursor

We could leverage our mouse tracking + hit testing to figure out the 'cursor' style of the node the mouse is over, and decide how to call the glfwSetCursor API, based on that.

API - Color: Implement hex parsing

We have a small Color API to parse / work with colors - it's very bare-bones at the moment, though!

One important feature we need for it is creating colors from hex-values. Some examples of cases we should be able to parse:

  • #FFF (3 element, rgb)
  • #FFFA (4 element, rgba)
  • #01010F (6 element, rgb)
  • #0101CC0F (8 element, rgba)

It'd be fine to use a library for this (I'm sure one already exists), but the main thing is just that we can parse these hex colors and get the right output.

An API like Color.hex("#FFF") that returns a Color.t would be perfect. We should have tests covering this case, as well.
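
A rough sketch of how the 3- and 6-digit forms could be parsed (the 4- and 8-digit alpha forms would follow the same pattern); the function shape and the float-channel output are assumptions for illustration:

/* Sketch: parse "#FFF" and "#01010F" into (r, g, b, a) floats in 0.0-1.0.
   Returns None for lengths we don't recognize. */
let parseHex = hex => {
  let s =
    String.length(hex) > 0 && String.get(hex, 0) == '#'
      ? String.sub(hex, 1, String.length(hex) - 1) : hex;
  /* Convert a 1- or 2-digit hex component into a float channel value. */
  let channel = str => {
    let full = String.length(str) == 1 ? str ++ str : str;
    float_of_int(int_of_string("0x" ++ full)) /. 255.;
  };
  switch (String.length(s)) {
  | 3 =>
    Some((
      channel(String.sub(s, 0, 1)),
      channel(String.sub(s, 1, 1)),
      channel(String.sub(s, 2, 1)),
      1.0,
    ))
  | 6 =>
    Some((
      channel(String.sub(s, 0, 2)),
      channel(String.sub(s, 2, 2)),
      channel(String.sub(s, 4, 2)),
      1.0,
    ))
  | _ => None
  };
};

Color.hex would then just wrap this and feed the channels into the existing Color constructor.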

Load assets relative to executable path

REPRO:

  • Run esy x Bin.exe

Expected: App should launch.
Actual: Fatal error: exception Fontkit.FontKitLoadFaceException("[ERROR]: Unable to load font at FT_New_Face\n")

The issue is that when we load fonts, or assets like images, we load them from the current working directory. This is problematic, because the user should be able to launch the executable from anywhere.

Because of this limitation - we have to include this awkward instruction in our README.md:

After you build, the executables will be available in the _build\install\default\bin folder.

NOTE: Currently the executables must be run from install\default\bin, since the assets are there.

However, the user should just be able to run via esy x Bin.exe to try out the example app.

Streamlining this would make the first-run experience much smoother, and also unblock #136.

Some things we need to do:

  • Have an API to get the executable's directory. I have an API like this I was working on in rench: revery-ui/rench#15 - but we could just port over the relevant code, too:
let getExecutingDirectory = () => {
    Filename.dirname(Sys.argv[0]);
};
  • Update our font loading and asset loading path to use the executingDirectory instead of the current working directory. We'd probably want to handle this in our TextNode and ImageNode classes, or we could handle it lower in the stack (ImageRenderer, FontCache). Append the executing directory to the requested asset path (see the sketch below).
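
A minimal sketch of the resolution step, reusing the helper above (resolveAssetPath is a hypothetical name, not an existing Revery API):

let getExecutingDirectory = () => Filename.dirname(Sys.argv[0]);

/* Resolve an asset name against the executable's directory instead of
   the current working directory. */
let resolveAssetPath = assetName =>
  Filename.concat(getExecutingDirectory(), assetName);

/* e.g. the font loader would open resolveAssetPath("Roboto-Regular.ttf") */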

Considerations:

  • We need to make sure this works fine in the JS strategy, too! I think all the logic should be the same if getExecutingDirectory simply returns / (the root) in the JSOO environment.

In the future, we might want to make our asset loading more flexible - some scenarios we'll potentially need to address:

  • Looking for font files in the system font directory
  • Allow custom paths (perhaps an app wants to put all its assets in a different place)

UI Styles: Implement 'box-shadow'

For box-shadow, we'll want to add a Style property, something like:

Style.make(~boxShadow=BoxShadow(-5., -5., 10., 10., Color.rgba(0., 0., 0., 0.5), ...), ())

The box-shadow properties would mirror the properties from CSS: https://developer.mozilla.org/en-US/docs/Web/CSS/box-shadow

(in order)

  • xOffset - the horizontal offset of the shadow
  • yOffset - the vertical offset of the shadow
  • blur radius
  • spread radius
  • color

Note that the box shadow won't impact Layout, so we don't need to worry about passing it to layout (like we did for flex properties!)

The way I would think about tackling this is splitting it up into a few parts:

Part 1: Set up the types

For part 1 - I'd just look at adding the types in Style.re, and plumbing them through.

We'd need to:

  • Add a BoxShadow type
  • Add a boxShadow property to the Style record / make functions (see the sketch below)
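
A rough sketch of what those Part 1 types might look like - the names and the stand-in color record are assumptions, not Revery's final API:

/* Stand-in color record so the sketch is self-contained; Revery would use Color.t. */
type color = {
  r: float,
  g: float,
  b: float,
  a: float,
};

type boxShadow = {
  xOffset: float,
  yOffset: float,
  blurRadius: float,
  spreadRadius: float,
  color: color,
};

/* Style.make would then accept an optional ~boxShadow=? labelled argument
   and carry it in the Style record. */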

Part 2: Render a solid shadow

For part 2, I'd skip the blur / spread to simplify things - we can just render a quad the same size as the view node, but offset based on the values of xOffset and yOffset.

The place we'd want to look at drawing this is in the ViewNode - this is the code that draws the background of a node:
https://github.com/bryphe/revery/blob/4a52028176aad32f81973a3a0bbc651299359dda/src/UI/ViewNode.re#L48

Prior to drawing that - we'd want to draw the 'shadow' quad. Right now, our rendering looks like this:

  • Set uWorld for background quad - this is the transform matrix
  • Set uColor for background quad - this sets the color in the shader

With our shadow, we'd want to do this in two passes:

  • Render shadow
    • Set uWorld for shadow - this is like the transform for the background quad, but with an extra translation to set the xOffset/yOffset.
    • Set uColor for the shadow - this would be based off the shadow color
    • Draw via Geometry.draw(_quad, solidShader)
  • Render background
    • Set uWorld for background (same as today)
    • Set uColor for background (same as today)
    • Draw via Geometry.draw(_quad, solidShader)

So our shadow would add an extra draw call.

Part 3: Render the blurred edges

For part 3, we'd take into account the spread and blurRadius.

I think the easiest way to handle this (at least in a quick way) would be to add additional quads for the edges, and render those edges with a shader that handles the gradient. At the 'core' of our shadow, the opacity would be 1.0 x the shadow color. At the edge of the shadow, it'll fall off to 0 - becoming more transparent as it spreads out. We could use additional quads around the core shadow, along with a shader that models this fall-off to get a smooth gradient.

There might be better ways to handle this, too.

Text Rendering / Clarity: Investigate subpixel rendering strategies

Issue: On Low DPI displays, the text rendering is not as clear as it could be.

One common technique for dealing with this is subpixel rendering, which exploits the fact that LCD displays tend to have subpixel strips in RGB order.

It's a strategy for increasing the resolution of font rendering by taking advantage of this pixel geometry.

@cryza did some amazing work in Oni to set up a full-WebGL based subpixel rendering strategy: onivim/oni#2120

The idea, as I understand it, was to render the same glyph 4 times (for each subpixel offset case), and then pick the appropriate one based on the pixel offset of the actual glyph. It would be great to have a similar strategy here, in combination with #108 - we'd have some really sharp font rendering ๐Ÿ˜„

Example: Clock

A 'wall clock' would be a great example for showcasing the transform functionality in #40 - the rotations could be used to implement the hour / minute / second hands. In addition, it could be packaged as a custom component, so that there'd be a clear cut example of a stateful custom component.

Example: To-do list

A to-do list is the canonical example for UI frameworks, so we should have one here too.

There are a few things we need first:

  • Checkbox control
  • Mouse input

Unable to build on Linux: esy-harfbuzz requires ragel

I had all the other dependencies, but I was unaware I needed ragel until I saw that harfbuzz failed to build. I'm not sure where the best place to document this would be - any guidance so I can submit a PR? Thanks!

Framebuffer and window size aren't matching

This was reported by @jordwalke - I believe it was on a Macbook Pro. On initial run of the examples, the framebuffer isn't matching the window size:

https://media.discordapp.net/attachments/452388437635366916/514025866305601537/Screen_Shot_2018-11-19_at_2.33.20_AM.png

The autocomplete should be taking up the entire space of the window (there should be no 'cornflower blue').

After resize, the rendering correctly fills the window. This suggests there is a mismatch between how we are determining the framebuffer size on initial creation and on resize.

Cannot run examples on MacOS

I can build, but I can't run the examples

jwalkes-MacBook:revery jwalke$  _build/install/default/bin/Autocomplete
- Loading font: Roboto-Regular.ttf
Fatal error: exception Fontkit.FontKitLoadFaceException("[ERROR]: Unable to load font at FT_New_Face\n")

Bug: Click events coordinates not happening where they should in retina screens

There seems to be an issue where the events don't propagate to the layer that one would expect.

It might be related to the coordinates and pixelRatio conversion in retina screens.

In the gif below, I'm clicking on the Click Me button which should increase the counter, but the events end up reaching the logo image:

click-retina

@bryphe It might be an issue upstream in glfw - I wasn't sure, so for now I'm opening it here, where the Bin example is 🙂

Animation: Implement 'Hooks.transition' hook

Today, we have a Hooks.animation hook here:
https://github.com/revery-ui/revery/blob/master/src/UI_Hooks/Revery_UI_Hooks.re and https://github.com/revery-ui/revery/blob/master/src/UI_Hooks/Animation.re

That is used as follows:

      let (rotationY, pauseRotationY, restartRotationY, hooks) =
        Hooks.animation(
          Animated.floatValue(0.),
          Animated.options(
            ~toValue=6.28,
            ~duration=Seconds(4.),
            ~delay=Seconds(0.5),
            ~repeat=true,
            (),
          ),
          hooks,
        );

(from https://github.com/revery-ui/revery/blob/master/examples/Hello.re)

I think it'd be convenient to have a Hooks.transition hook, that would work as follows:

let currentValue = Hooks.transition(1.0, { duration: Seconds(1) });

The idea is that you could use this along with some other events, for example:

let (opacity, setOpacity) = Hooks.state(1.0);

let transitionedOpacity = Hooks.transition(opacity, { duration: Seconds(1) });

let onMouseDown  = () => setOpacity(0.5);
let onMouseUp = () => setOpacity(1.0);

<view style={Style.make(~opacity=transitionedOpacity, ())} ... />

This would enable a smooth transition between the opacity values, as opposed to just directly switching from 0.5 <-> 1.0. The Hooks.transition hook could leverage Hooks.state under the hood to keep track of the last value. If the last value is different, it could start an animation and use the animated value. Otherwise, it could just return the current value.

UI Infrastructure: Event Bubbling

At the current time, we have a very simple event model for handling mouse events.

That logic is in https://github.com/revery-ui/revery/blob/master/src/UI/Mouse.re , specifically here:

    let isNodeImpacted = n => n#hitTest(pos);
    let nodes: ref(list(Node.node('a))) = ref([]);
    let collect = n =>
      if (isNodeImpacted(n)) {
        nodes := List.append(nodes^, [n]);
      };
    Node.iter(collect, node);
    List.iter(n => n#handleEvent(eventToSend), nodes^);

This is very simple - we check for all the nodes that pass the 'hit-test', and dispatch the event to all of them.

However, this is unexpected behavior and not intuitive if you're coming from web programming - as webdevs, we'd expect the event to dispatch to the top-most element, and bubble up from there!

We need to implement this event-bubbling behavior in revery. This will be useful not just for the initial mouse events, but for all sorts of other events - like keyboard input, etc.

Proposal

We add a UiEvents module that has a function bubble(node, event). The bubble function would do a few things (see the sketch after this list):

  • It would wrap the event with some extra methods, like stopPropagation or preventDefault - like we'd expect in Web events.
  • It would keep some internal state of whether propagation was stopped or default was prevented (ie, via refs).
  • It would call handleEvent for each node in the hierarchy. If stopPropagation is called, it should discontinue the traversal up the hierarchy.
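
A minimal sketch of that traversal, with simplified stand-in types for the node and the wrapped event (not Revery's actual Node hierarchy):

type bubbledEvent('a) = {
  event: 'a,
  stopPropagation: unit => unit,
};

type node('a) = {
  handleEvent: bubbledEvent('a) => unit,
  parent: option(node('a)),
};

/* Dispatch to the node, then walk up the parent chain until stopPropagation
   is called or the root is reached. */
let bubble = (node, event) => {
  let stopped = ref(false);
  let wrapped = {event, stopPropagation: () => stopped := true};
  let rec traverse = current =>
    switch (current) {
    | None => ()
    | Some(n) =>
      n.handleEvent(wrapped);
      if (stopped^) {
        (); /* a handler stopped propagation */
      } else {
        traverse(n.parent);
      };
    };
  traverse(Some(node));
};

preventDefault would follow the same pattern as stopPropagation, with its own ref that the event's eventual consumer can check.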

Testability

We should be able to craft unit tests that exercise this with some simple Node objects that have handlers that call stopPropagation, along with counters that validate whether the events were hit.

Application

We can hook this up to the mouse event bubbling behavior today, by picking out the top-most node that passes the hit-test. We need to make sure our z-index tracking is working correctly for this.

Text elements do not render correct background colors

revery-text-bug

Setting a background for text elements does not set the color of each character's cell to match the overall background color, leading to a patchy appearance. I'm not entirely sure how to go about looking into this, @bryphe, as the meat of how Revery works (shaders, etc.) is very new to me, but I could have a look if you point me in the right direction.

Performance - Text Rendering: Use texture atlas for characters

Rendering text is currently very expensive in Revery, because it involves lots of context-switches to jump between shaders (this is made even worse by the fact that we currently regenerate textures every frame - but that's a separate issue).

The text rendering could be significantly improved by having a texture atlas that contains all the rendered glyphs - then, we could render a line of text in a single pass (or at least, a more minimal set of passes), as opposed to the situation today, where each quad / texture gets its own pass.

There's an excellent TextureAtlas implementation by @cryza in Oni that could be useful here: https://github.com/onivim/oni/blob/master/browser/src/Renderer/WebGLRenderer/TextRenderer/GlyphAtlas/GlyphAtlas.ts 👍

Performance - Advanced: ContainerNode that uses a render texture

UI widgets / controls are often relatively static and don't need to re-render very frequently.

For those cases, it doesn't really make sense for us to re-render the entire widget every frame. This is costly and involves lots of transforms. If it doesn't get updated very often, it makes sense to render the widget to a render target (a texture). When it's cached - we'd just render a quad + that rendered texture.

Proposal: We'd add a <container>...</container> tag that does this caching (it would correspond to a ContainerNode that handles this).

The downside is that, if a widget is rendered frequently, it ends up being more expensive to render to the texture, plus use that texture to render a quad.

I thought about whether it would be possible to automate this, but I think the application developer has the right domain knowledge to know when to apply this pattern. It would never be functionally necessary to use <container />.

In addition to the ContainerNode itself, we'd have to add some dirty-tracking: the container node would need to know how to check its children to see whether the cached texture needs to be invalidated.

It's similar to how CSS developers use translate3d(0, 0, 0) to force layer promotion (see https://aerotwist.com/blog/on-translate3d-and-layer-creation-hacks/) - this would just be more explicit.

Documentation: Integrate odoc to generate documentation

odoc is a great OCaml-community-supported project for generating documentation - and it even integrates with esy. It'd be awesome if we could use it to generate some initial documentation.

I imagine we'll need to do a better job of documenting in the source files (using the proper code-block comments to get good descriptions in the documentation) - it'd be helpful to know what we need to do there to get high-quality docs.

OPAM package

I'd like to publish an OPAM package for revery to make sure it's usable by the broader OCaml community (which would primarily consume this library via OPAM).

I've only tested it with esy currently, but it would be great to publish and verify this through OPAM.

Examples: Consolidate to a single example

Now that we have Clickable and <Button /> in #152, I was thinking it'd make sense to consolidate to a single example project. We could still keep each sample in its own Module - but have some sort of navigation story to go between them.

For example, we could have <Button /> for each example on a pane, and then render the selected example in the remaining space.

It'd be nice to just be able to run esy x Examples.exe and be able to quickly navigate between them!

Compile-time asset loading

This is ported from some really neat ideas around compile-time asset loading that @jchavarri and @OhadRau mentioned in PR #153. (Not my idea, so I don't want to take credit for it 😄) I've brought over some notes from that PR:

From @jchavarri :

I'd love to play around with ideas for handling compile-time-known asset paths at build time. I'm still unsure how that would look exactly, but my idea right now is to read these paths at compile time, maybe through a ppx, read the assets from the ppx binary, and convert the load expression into an assignment to a binding of the whole file's binary data (maybe as a string? or as binary data?). Something like https://github.com/johnwhitington/ppx_blob.

From @OhadRau :

@jchavarri If you're interested in compile-time asset loading, that's something I was actually going to try to make. I've thought of a few ways of doing it, and once I settle on one I'll go ahead and write a PPX for it:

  • Automatically guessing which files are statically accessible (if the path given to whatever File.load is a string literal && the file exists)
  • Having a [@static] annotation that can be used when loading a file to inform the compiler that it's a static asset
  • We can output each file as a linkable object file (e.g. https://www.linuxjournal.com/content/embedding-file-executable-aka-hello-world-version-5967) and generate extern declarations
  • We could also make an array or hashtable of strings (binary or otherwise) that we load, basically the same as what ppx_blob would do
  • For each of these, we have to let File.load know that these files are already loaded. We can do this by using a hashtable/other data structure of cached files and then inserting that as an optional parameter/overriding an empty data structure when we've loaded files

Text Rendering / Clarity: Implement correct gamma correction

Issue: The font rendering is not as clear as it could be on low-dpi displays.

One contributing factor is that we are not appropriately handling the gamma color space. When we render a glyph, freetype gives us back an alpha mask - each pixel is an 8-bit value describing the coverage. If a pixel is 50% covered, it is made 50% black.

However, that 50% coverage does not actually translate to 50% perceived brightness - we treat '128' as the halfway point (which it is in linear space), when in actuality ~'186' is the halfway point for brightness.
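
As a rough sketch of where that ~186 figure comes from (assuming a display gamma of 2.2, which is an assumption for illustration):

let gamma = 2.2;
/* Encode a linear coverage value into the gamma space the display expects. */
let encodeCoverage = linear => linear ** (1. /. gamma);
/* encodeCoverage(0.5) is ~0.729, i.e. ~186 out of 255 rather than 128. */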

This is described in more detail here:

One open question is - does this mean we can't render text with a transparent background? It might be that, for subpixel rendering and for this, we'd need to render text with a solid background (or a known color / map in the background).

Following from the above document, there might be a way to gamma-correct properly w/o knowing the background: https://bel.fi/alankila/lcd/

The goal of this work would be to implement proper gamma-correction - ideally preserving transparent backgrounds for the text, if possible!

UI Infrastructure: Focus Management

Managing focus for text input is critical for useful interactive applications. In general, clicking on a text input should grant it focus. In addition, other elements may be 'focusable' for accessibility, like buttons.

Revery should provide an intuitive, React-like interface for working with focus, that is familiar for web developers using React.

Focus is an inherently stateful concept - for a basic scenario, we can keep track of focus at the node level. Our 'focus manager' could essentially keep track of the focused node via a ref.

Proposal

Internally, we need to:

  • Add a Focus module that keeps track of the actively focused node. When that actively focused node changes, it should dispatch focus and blur events to the respective nodes.

For our Nodes, we need to:

  • Implement .focus() and .blur() methods. These would be available via the ref introduced in #139

On our JSX side, we need to:

  • Add a tabindex field which, for now, is simply a proxy for whether or not a node is focusable. Later, when there is more focus on keyboard accessibility, we can extend this to behave like the browser (i.e., tab-key flow)
  • Add an onFocus and onBlur event for nodes. This should be added to our NodeEvents module and dispatched at the proper time.

This lays the groundwork for a simple focus management story. Once we have this in place, we can start 'funneling' the keyboard input to the focused element. Key for implementing apps that need forms!
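
A rough sketch of the ref-based tracking described above, with a simplified stand-in node type (tabindex / onFocus / onBlur here are illustrative, not Revery's Node API):

type focusableNode = {
  tabindex: option(int),
  onFocus: unit => unit,
  onBlur: unit => unit,
};

let focusedNode: ref(option(focusableNode)) = ref(None);

let blur = () =>
  switch (focusedNode^) {
  | Some(previous) => {
      previous.onBlur();
      focusedNode := None;
    }
  | None => ()
  };

let focus = node =>
  switch (node.tabindex) {
  | None => () /* not focusable: no change in focus */
  | Some(_) => {
      blur();
      node.onFocus();
      focusedNode := Some(node);
    }
  };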

Testability

  • Calling .focus on a node without tabindex should not change focus
  • Calling .blur on a node with tabindex should cause no node to have focus
  • Calling .focus on a node with tabindex set should change focus
  • When focus changes, the appropriate events are fired (onBlur is triggered for the previous element, and onFocus is triggered for the new element).

Rotation doesn't update for <view>s

Whenever an element is rotated, it keeps its initial position for the remainder of the program's runtime. I can't tell if this is something on my end or if transforms just haven't been finished for elements.

Really minimal example:

open Revery;
open Revery.Core;
open Revery.UI;

let init = app => {

  let w = App.createWindow(app, "test");

  let ui = UI.create(w);

  let textHeaderStyle = Style.make(~backgroundColor=Colors.red, ~color=Colors.white, ~fontFamily="Roboto-Regular.ttf", ~fontSize=24, ~marginHorizontal=12, ());

  let smallerTextStyle = Style.make(~backgroundColor=Colors.red, ~color=Colors.white, ~fontFamily="Roboto-Regular.ttf", ~fontSize=18, ~marginVertical=24, ());

  Window.setShouldRenderCallback(w, () => true);

  Window.setRenderCallback(w, () => {
    UI.render(ui,
        <view style=(Style.make(~position=LayoutTypes.Absolute, ~bottom=50, ~top=50, ~left=50, ~right=50, ~backgroundColor=Colors.blue, ()))>
            <view style=(Style.make(~position=LayoutTypes.Absolute, ~bottom=0, ~width=10, ~height=10, ~backgroundColor=Colors.red, ())) />
            <view style=(Style.make(~width=128, ~height=64, ~transform=[RotateX(Angle.from_radians(Time.getElapsedTime()))], ())) />
            <text style=(textHeaderStyle)>"Hello World!"</text>
            <text style=(smallerTextStyle)>"Welcome to revery"</text>
            <view style=(Style.make(~width=25, ~height=25, ~backgroundColor=Colors.green, ())) />
        </view>);
  });
};

App.start(init);

Note that this is the exact same example as Bin.re, just with the tag changed to <view />.

(Btw thanks for this project, I've been waiting for something like this to come along for ages... would love to help out on some of the work for this library)

UI Styles: Implement 'overflow: hidden'

The overflow: hidden style is important, and will be useful as we start implementing scrollable widgets.

A couple things we'd need to do:

  • Add an 'overflow' property to the style (and pass it to flex)
  • When overflow hidden is set, we'd want to use glScissor to clip the rendering region to the widget's bounds. We might have to take extra care in the transform case such that we only scissor the axis-aligned bounding box.

Unable to run esy build on mac

    /Users/Bret/.esy/3____________________________________________________________________/b/esy_freetype2-2.9.1001-4cd9f534
    Using compiler: gcc
    include...
    .
    ..
    freetype
    ft2build.h
    lib..
    .
    ..
    cmake
    libfreetype.a
    pkgconfig
    ld: library not found for -lpng
    clang: error: linker command failed with exit code 1 (use -v to see invocation)
    ./esy/test.sh: line 21: ./test: No such file or directory
    error: command failed: './esy/test.sh' (exited with 127)
    esy-build-package: exiting with errors above...

  building [email protected]
esy: exiting due to errors above

I'm not entirely sure if this belongs here or on the esy issues, so any guidance is appreciated

Performance: Cache results of `hb_shape` calls

As can be seen from the profile here:

image

We spend a lot of unnecessary time shaping text over and over. We should be caching or memoizing this call - the shape results are constant for a (font family, text) tuple.
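
A sketch of what that memoization could look like, keyed on the (font family, text) tuple - shapeUncached stands in for the underlying Fontkit / harfbuzz call, and shapeResult is a placeholder for whatever it returns:

type shapeResult = array(int); /* placeholder for glyph ids / positions */

let cache: Hashtbl.t((string, string), shapeResult) = Hashtbl.create(64);

let shape = (~shapeUncached, fontFamily, text) =>
  switch (Hashtbl.find_opt(cache, (fontFamily, text))) {
  | Some(result) => result
  | None => {
      let result = shapeUncached(fontFamily, text);
      Hashtbl.add(cache, (fontFamily, text), result);
      result;
    }
  };

A real cache would likely also want a size bound or an invalidation story, but even this simple version removes the repeated shaping seen in the profile.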

Animation: Springs - 'useSpring' hook?

Inspired by react-spring - springs are excellent tools for creating an interactive and animated UI. Another great description comes from the react-motion repo.

Proposal

We could enable easy spring-based animations via a useSpring hook. There's been thinking about this already here: https://medium.com/@drcmda/hooks-in-react-spring-a-tutorial-c6c436ad7ee4

Our hook could look like:

let currentVal = useSpring({ currentValue, destinationValue, springConfiguration });

springConfiguration could have configurable properties:

  • stiffness
  • damping

And, like react-motion, it would be helpful to have a good set of presets for this.

Under the hood, we'd need to use setState to store the current position, velocity, and acceleration. We can use Hooke's Law to determine the acceleration from the spring force (F = kd = ma); see the sketch below.

We'll also need to update this every tick - we'll need to generalize our Animation framework a bit to allow for arbitrary tick functions (right now, we always call the tickAnimation function).
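
A sketch of a single damped-spring integration step (Hooke's law plus damping, explicit Euler); the names, the unit mass, and the integration scheme are illustrative choices, not a settled design:

type springState = {
  position: float,
  velocity: float,
};

type springConfig = {
  stiffness: float,
  damping: float,
};

/* Advance the spring by deltaTime seconds toward `target`. */
let step = (~config, ~target, ~deltaTime, state) => {
  let displacement = state.position -. target;
  let springForce = -. (config.stiffness *. displacement);
  let dampingForce = -. (config.damping *. state.velocity);
  let acceleration = springForce +. dampingForce; /* assuming unit mass */
  let velocity = state.velocity +. acceleration *. deltaTime;
  let position = state.position +. velocity *. deltaTime;
  {position, velocity};
};

The tick function mentioned above would call step each frame and feed the resulting position back through setState.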

Animation: Add interpolation functions

Issue: Today, all our animation infrastructure uses a 'linear' easing function.

This happens in our getLocalTime method - we're implicitly using the linear easing function, which is just t => t. However, linear easing is not very visually appealing - usually you want an easing-style animation that more closely replicates a physical model.

The proposal is to add an additional parameter, easing, to our animation options:
https://github.com/bryphe/revery/blob/e024382f2631ac2026f129c52091e410c5e7a29a/src/UI/Animation.re#L27

This would be simply a function of type easingFunction: float => float.

The user could define their own easing function, or we could have some defaults (a few are sketched after this list):

  • linear
  • step(boundary) would be t => t < boundary ? 0.0 : 1.0
  • quadratic would be t => t * t
  • cubic would be t => t * t * t
  • easeIn
  • easeOut
  • easeInOut
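
A sketch of a few of these as plain float => float functions; easeIn / easeOut / easeInOut could be built from these or from a cubic-bezier helper:

let linear = t => t;
let quadratic = t => t *. t;
let cubic = t => t *. t *. t;
let step = (boundary, t) => t < boundary ? 0.0 : 1.0;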

Input Events - Implement 'capture' API for Mouse events

Proposal: capture-like API for mouse events

Why?

Often for UI elements, after the initial mousedown, the component needs to track the mouse movement and actions exclusively. Some examples of this:

  • <Button /> - for a click event, you don't want to dispatch it immediately on the mousedown - most UIs will wait for the mouseup before dispatching. If the mouseup occurs elsewhere, the UI does not fire a click event. While in this limbo-state between mousedown and mouseup, hovering over other elements is a no-op.
  • <Slider /> and <Scrollbar /> - once a mousedown has occurred, we want to track the mouse movement, even if the mouse cursor moves away from the slider or scrollbar. We can still update the value of the slider / scrollbar, until the user releases via a mouseup event.
  • Drag and Drop - for an element that supports drag-and-drop, we want to contain the mousemove and mouseup until the drag/drop gesture has completed.

Proposal

Add a Mouse.setCapture API that could be used as follows:

/* While capturing is active, events will _only_ be forwarded to these handlers */
Mouse.setCapture(~onMouseDown, ~onMouseUp, ~onMouseMove);
...
/* Release capture */
Mouse.releaseCapture();

Example usage

For a button element, we could add an onMouseDown handler that looks like this:

let onMouseDown = (evt) => {
    let noop = (_evt) => ();

    let releaseCapture = ref(None);
    let capturedMouseUp = (evt) => {
         dispatchClickEvent(evt);
         Mouse.releaseCapture();
    };
    Mouse.setCapture(~onMouseDown=noop, ~onMouseMove=noop, ~onMouseUp=capturedMouseUp);
};

The <Button /> could do extra validation - like verify the onMouseUp actually occurred over the element, or that it was within a certain distance, etc.

UI Infrastructure: Events

Right now, for our examples like AutoComplete, we just bind directly to the raw GLFW / window events.

This isn't ideal, because for things like focus management to work, we need to control the 'bubbling' of events across the node 'hierarchy'.

We should implement a handleEvent method on our Node hierarchy, and add the following event types:

  • KeyDown
  • KeyPress
  • KeyUp
  • MouseDown
  • MouseUp

These are events that we'd bubble through our node hierarchy, and then expose as handlers on the primitives / component tags.

UI Styles: Implement 'transform'

Inspired by React-native - elements should be able to be passed an arbitrary transform: https://facebook.github.io/react-native/docs/transforms

In particular, we should support the following transform types:

  • rotateZ
  • rotateY
  • rotateX
  • rotate (proxy for rotateZ)
  • scale
  • scaleX
  • scaleY
  • scaleZ
  • translateX
  • translateY

Note that in Reason, we can have a more ergonomic API by using Variants, like:
<view style=(Style.make(~transform=[TranslateX(100), Scale(0.5)], ())) />

Where we define a transform type like:

type transform =
| RotateZ(..)
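
Sketched out fully, the variant might look like the following - the payload types (float angles and scale factors, int pixels for translation) are assumptions, not a settled API (elsewhere in these issues, rotations take an Angle.t):

type transform =
  | RotateZ(float)
  | RotateY(float)
  | RotateX(float)
  | Rotate(float) /* proxy for RotateZ */
  | Scale(float)
  | ScaleX(float)
  | ScaleY(float)
  | ScaleZ(float)
  | TranslateX(int)
  | TranslateY(int);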
