
Comments (27)

kirbysayshi commented on August 28, 2024

Well frankly this is super exciting. (!)

I have a few concerns:

  1. I never want to truly tie Vash to a DOM
  2. I want to keep Vash as simple as possible, which so far has meant doing things at runtime as opposed to compile-time (for example, loading blocks/partials when they are requested at runtime instead of resolving them during compilation)
  3. I worry that in order to make the syntax as clean as possible it may violate #1 and #2

So, having said those things, let's try it!

If I'm understanding you correctly, the code snippet you posted is working right now? Are you using the new helper api / system I introduced in Vash ~0.5 for buffering, or something you rolled on your own?

In a perfect world, what kind of API and functionality are you expecting or think would be best? It might be a good place to start, given that you have a working example that uses the current system with relatively verbose syntax.

I could see a special-cased version of your snippet:

@html.live( model, "time" )

Where if you don't pass a callback/content block, it assumes you only want to output the changed value without extra markup. But I haven't used live binding enough to know the common use cases.

from vash.

rjgotten commented on August 28, 2024

I'm using the new buffer helper system, yes. It was an (almost) perfect fit for this. My changes are not yet published though: the current code I have is still strongly tied to the CanJS implementation of observable models and first needs to be decoupled. I just wanted to gauge your interest before investing time in doing so.

I'm not envisioning much additional API above what you are already suggesting: a simplified function signature which assumes you only output the changed value. KISS is very important here, imho.

Currently I use the buffer helper to wrap content in a <span> element which is given a unique (auto-incrementing) ID. This ID could later be used to look the element back up (through document.getElementById for instance) and set updated html.
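In rough code, the wrapping idea might look like this (the helper name and ID scheme here are hypothetical, purely to illustrate):

```javascript
// Hypothetical names, just to illustrate the wrapping scheme: each live
// region gets a unique auto-incrementing ID, resolvable later with
// document.getElementById on the client.
var liveId = 0;

function wrapLive(innerHtml) {
  var id = 'vash-live-' + (liveId++);
  return {
    id: id,
    html: '<span id="' + id + '">' + innerHtml + '</span>'
  };
}

var region = wrapLive('<strong>hello</strong>');
// region.html: '<span id="vash-live-0"><strong>hello</strong></span>'
// Client-side update later: document.getElementById(region.id).innerHTML = ...
```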

The neat part is: this all works without any real core changes to Vash. The only change is explicitly using helpers.__vo.push instead of __vo.push, since helpers.__vo can be reached from the outside and can thus be blanked. It could be made a lot nicer if we would set up a stack of __vo buffers and the ability to push/pop this stack:

A buffered helper could push a new buffer onto the stack, do whatever it needs to do and have it collate into this new buffer, pop the new buffer off the stack, join it as a string and then push the joined string onto the original buffer. Fairly easy, but a lot more clean.

The not-so-neat part is the additional span. We'd need to dive a lot further into the AST to find out if there is already an HTML tag present onto which we can latch an ID for the live binding. I'm still investigating feasibility here.

The company I work for has opened up an organization account w/ Github to share feature improvements and bug fixes back to the community. I'll be sending you a pull request as soon as I get the time to straighten out the code and get everything submitted. (Hopefully will find the time to do so over the evenings this coming week.)


kirbysayshi commented on August 28, 2024

I'm not sure what you mean by a stack of __vo buffers, could you elaborate? Was a code sample supposed to make it through that didn't?

If I'm understanding correctly, what you're talking about doing is similar to what the highlight helper demo does using this.buffer.mark()... are you suggesting a cleaner way of doing this? Or perhaps expanding the current "buffer api"? I envisioned helpers not really using helpers.__vo, because of how error prone it is, and instead provided the buffer api.

Regarding the additional span, that's a lot more difficult, because at the time the helpers are called, there is no AST, just entries in the buffer. In addition, the HTML parsing is minimal, meaning an AST node that represents an opening html tag is simply a string of html (no DOM-like things like attributes, className, id, etc). Off the top of my head, there is a way around all of this, but it's a little hacky.

One idea is for vQuery to provide a way to serialize an AST, so it can be sent client-side, reconstituted into a live structure, and then registered as the AST for a particular template. In the compiler, when a node is visited, it could attach a unique ID to each node. There would also need to be instrumentation like __vnode = {astNodeUniqueId} added after each node. Then a helper could, at runtime, match where rendering is against the AST, and query information from it (such as the closest opening html tag), as well as update it when additions were made.

Keep in mind that adding ids to existing html tags is pretty error prone, because an id may already be defined... might want to think about data-* attributes...

I just had another idea...

@html.live could return a straight string of html, something like <span id="vdata-27"></span>. It would also "register" the arguments passed to it, as well as the callback results. Then, the system used to actually render everything could do straight dom replacements after the rendered template is added to the DOM. There are complexities in buffers + callback side effects, but this might be the easiest way.


rjgotten commented on August 28, 2024

Regarding the stack of __vo buffers:

it's very similar to the current situation with grabbing an index with mark, and later splicing off the tail portion of the buffer with fromMark, but formalized as an actual stack.

E.g.

vash.helpers.example = function( model, fn ) {
  // Set a new buffer at the top of the stack.
  this.buffer.push();

  // Output flows into the new buffer at the top of the stack.
  fn( model ); 

  // Pops the top buffer and returns it as a concatenated string, restores
  // underlying buffer as the one receiving input.
  var str = this.buffer.pop();

  // Return result.
  return str;
};

But then I realized that this conflicts with the regular push/pop semantics of a JS array for appending/removing items and that it would limit your post-processing options in adding additional HTML before/after the piece of buffer handled by the helper, so: meh, probably best to just keep using the mark and fromMark methods instead.
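For comparison, a sketch of the same helper using mark / fromMark instead (the Buffer object below is a minimal stand-in mirroring the names from this discussion, not the actual Vash source):

```javascript
// Minimal stand-in for Vash's buffer, mirroring the mark / fromMark
// names from the discussion (not the actual Vash implementation).
function Buffer() { this._vo = []; }
Buffer.prototype.push = function (s) { this._vo.push(s); };
Buffer.prototype.mark = function () { return this._vo.length; };
Buffer.prototype.fromMark = function (mark) {
  // Splice off everything appended since the mark was taken.
  return this._vo.splice(mark, this._vo.length - mark);
};

// A helper grabs a mark, lets the content block write, then post-processes
// the tail portion of the buffer:
function exampleHelper(buffer, model, fn) {
  var mark = buffer.mark();
  fn(model); // content block output flows into the shared buffer
  var str = buffer.fromMark(mark).join('');
  return '<span>' + str + '</span>';
}

var b = new Buffer();
b.push('before ');
var out = exampleHelper(b, { name: 'x' }, function (m) { b.push(m.name); });
// out === '<span>x</span>'; the buffer still holds only 'before '
```

This keeps normal array push/pop semantics intact while still isolating the helper's output.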

Regarding the additional wrapping HTML elements:

I was already aiming for the straight string of HTML and always wrapping in a new tag for simplicity's sake. It's just that there could be issues with requiring a <span> or a <div> as the wrapping element on which the ID attribute resides, depending on inline / block contexts and allowed tags inside. That is where an AST from which to grab contextual information regarding the surrounding HTML tags could come in handy.

Also, if you add a semi-unique namespaced prefix onto an auto-incrementing number you should get a reasonably safe ID that shouldn't lead to duplicates. Something like <span id="vash-live-27"></span> should be good. I wouldn't worry about that.

Maybe we could circumvent the inline/block dilemma by generating empty boundary indicators such as:

<span id="vash-live-start-27"></span>
<div>Original markup that should be live-bound goes here</div>
<div>And here</div>
<span>Or maybe here</span>
<span id="vash-live-end-27"></span>

Then leave construction of the proper DOM replacement logic (matching and replacing tags between these boundaries) up to the third-party integration layer for whatever (pairing of) MV* and DOM library you are using clientside. E.g. with jQuery you get something like:

$( "#vash-live-start-27" )
  .nextUntil( "#vash-live-end-27" )
    .remove()
    .end()
  .after( html );

Regarding separation of DOM manipulation & model implementations from Vash:

I did a bit of sketching and after some more thought, I think I have some ideas on how to separate the required event binding to models and DOM manipulations for insertion of live re-rendered content from the actual templating/rendering logic handled by Vash.

Basically; html.live should store a hash/literal somewhere that maps IDs to a set of parameters that constitutes the original arguments passed in a call to html.live, plus a function that can be called to re-render the html text contained in that particular html.live call.

The agreed upon contract for html.live would look something like this:

@* Full signature; uses `model` to re-render rich markup, whenever `model.propA` changes *@
@html.live( model, "propA", function( model ) {
  <strong>@model.propA</strong>
})

@* Shorthand; re-renders only `model.propB` as direct text element, whenever it changes. *@
@html.live( model, "propB" )

And somewhere a literal would be built up and stored that would structurally resemble:

{
  "vash-live-27" : { model : {...} , prop : "propA" , renderer : fn( model ){ ... }},
  "vash-live-28" : { model : {...} , prop : "propB" , renderer : fn( model ){ ... }}
}

This kind of structure could be used to set up the live model bindings externally from Vash. CanJS, Backbone, Knockout or any other observable models framework can be used to listen for changes of property prop on model. When a change happens, jQuery, Zepto, Mootools, or any DOM manipulation framework or just the plain DOM can be used to look up the element with the ID and replace its innerHTML with the result of calling renderer with model.

This should also keep Vash easily unit-testable in Node as it won't need to concern itself with events and observable models, or DOM manipulations. All it should do is expose the above structure somewhere for a third party to pick up and integrate live bindings on.
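As a sketch, the third-party glue consuming that structure might look like this (observe and the element lookup are placeholders for whatever observable-model and DOM libraries the integration layer actually provides):

```javascript
// Hypothetical glue layer consuming the bindings hash described above.
// `observe` and `getElement` are placeholders for whatever observable-model
// framework and DOM access the integration layer actually provides.
function bindLive(bindings, observe, getElement) {
  Object.keys(bindings).forEach(function (id) {
    var b = bindings[id];
    observe(b.model, b.prop, function () {
      getElement(id).innerHTML = b.renderer(b.model);
    });
  });
}

// Runnable stand-ins (in a browser, getElement would be getElementById):
var elements = { 'vash-live-27': { innerHTML: '' } };
var handlers = [];
function observe(model, prop, cb) { handlers.push(cb); }

var bindings = {
  'vash-live-27': {
    model: { propA: 'hello' },
    prop: 'propA',
    renderer: function (m) { return '<strong>' + m.propA + '</strong>'; }
  }
};

bindLive(bindings, observe, function (id) { return elements[id]; });
bindings['vash-live-27'].model.propA = 'world';
handlers.forEach(function (cb) { cb(); }); // simulate a change notification
// elements['vash-live-27'].innerHTML is now '<strong>world</strong>'
```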

I'm still thinking of a good publicly reachable place to fit these produced mappings on individual template instances...


kirbysayshi commented on August 28, 2024

Ok, let's leave the buffer API as is for now then.

Regarding Wrapping

Inserting tags as "markers" seems fragile, especially given that jQuery snippet of nextUntil (didn't even know about that one!). Having said that, it's the same choice I made when making Citational for marking the beginning and end of the quoted content!

Have you considered using a custom element? Something like <vdb id="vash-27">? Then you wouldn't have to worry about block/inline semantics. My only concern is older browsers, like IE8. I think everything would be fine, as the HTML5Shiv effectively uses "custom elements" to make HTML5 elements styleable.

This is only an issue with "complex" bindings, since simple (shorthand) bindings could be fairly-certainly wrapped with a span tag.

XPath might be another alternative to avoid inserting elements (Firebug has a good implementation).

Regarding @html.live

What do you need from Vash/me for this? It seems like it's possible to implement the whole thing using a helper. As to where to store this structure, vash.helpers.live.bindings (or something similar) could work.

The "connector" to other libraries still seems like an issue. Something needs to know about the other, which means some kind of glue. As I think about it more, this sounds like a relatively complex bit of code, even with the bindings exposed as you suggested. I was thinking of how this would work in a no-fuss way with Backbone, and an answer did not immediately come to mind. I assume you have an implementation for CanJS?


rjgotten commented on August 28, 2024

Your suggestion to use custom tags is something I've also considered. Internet Explorer would still have problems with it when combined with setting content through the innerHTML property though, unless precautions are taken with an additional shiv to handle that case.

We could also use xml namespacing (e.g. <vash:live id="..."> ...</vash:live>) at which point Internet Explorer all the way down to IE6 (and probably even before that) will happily accept the markup as syntactically valid without further intervention. Not sure if something like <table><vash:live><tr>...</tr></vash:live></table> would work though. Chances are that this specific scenario would still break as tables are kind of special in the DOM.

Another option we have is to allow explicit marking of the element that will serve as the live bound container. E.g. you could do something like:

@html.live( model, "prop", function( model ) {
  <div @html.live() >
    <span>@model.prop</span>
  </div>
})

I'm working on hammering out more details and abstracting out the current hard dependencies on jQuery / CanJS in my current working version. I'll have more on that soon, but I'm running into a few snags with re-entrancy that I have to solve first:

Vash currently uses one shared vash.helpers literal and this messes with state when a template is partially re-run for a live binding. I'm going to see if I can work around that by creating a Helpers class and passing a fresh instance of said class into each run of a template instead. For extensibility with more helpers, you can then expose Helpers.prototype as the public vash.helpers.

Luckily this still won't require any fundamental change to the lexer or parser, so it's still not that invasive a change. Also; such per-instance helpers give us a place to store and expose created live bindings, neatly covering that issue as well.


kirbysayshi commented on August 28, 2024

I was looking at Ember's mustache binding code for the first time. (wow, what a complex library)

They do some crazy things, like parse object "paths": {{ someobj.prop[0].what }} gets turned into {{ bind "someobj.prop[0].what" }}, where bind is a helper that accepts one argument, a string path. It also creates a binding object that can be used to dispatch and observe.

They parse the path and convert it to a "dasherized" class name, which is then used to later locate the binding location in the DOM. So the helper would return something like <span class="someobj-prop-0-what">...</span>.
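The dasherizing step itself is simple enough to sketch (this is an illustration of the idea, not Ember's actual implementation):

```javascript
// Illustration only, not Ember's actual code: turn an object path into a
// class-name-safe, dasherized token.
function dasherizePath(path) {
  return path
    .replace(/\[(\d+)\]/g, '.$1') // prop[0] -> prop.0
    .split('.')
    .join('-');
}

dasherizePath('someobj.prop[0].what'); // -> 'someobj-prop-0-what'
```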

They also have a concept of sub/child views that are aware that they are children, and whose parent knows which children it has. So if a child is told to rerender, it tells all of its children to rerender.

So that's interesting, maybe you'd already seen it.

Helpers as a Class

I'm obviously not privy to your code and implementation, but I would rather not have helpers become a class with a prototype. I feel that vash as a whole should be stateless, and so should the helpers. This is why Vash's runtime is extremely minimal. Requiring an instance of something to be passed into a template for rendering also disallows precompiled templates, which is an important feature to me. It's something that sets Vash apart from other razor-based libraries, and is important in a complex client-side application. A template should be transparent: if you toString it, you should see exactly what's going to happen to the furthest extent possible.

I don't mean to discourage you, but please keep these ideas in mind (or tell me why I'm wrong! :) ) while you're working this out.


rjgotten commented on August 28, 2024

I know about keeping Vash as lean, efficient and adaptive as possible. You expressed concern about that before and my work tries to honor it as much as possible.

I actually finished my refactoring last friday and my changes can be summarized as follows:

The current situation in the official Vash build is that a reference to the static literal vash.helpers is picked up in the template and a reference to the __vo array is copied onto it for use by helpers such as buffer. I changed this to instantiate a fresh instance of a vash.Helpers class, which carries an isolated __vo array. The template's local __vo variable now aliases this buffer, instead of the other way around.

As you can see, the impact is really quite isolated and minimal. It does however remove the shared state on the __vo buffer, which was causing problems with re-entrancy and live binding. (It will also work with precompiled templates just fine and of course passes all unit tests.)
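In outline, the generated wrapper under this scheme would be something like this (illustrative shape only, not the exact code from my branch):

```javascript
// Illustrative shape of the change: each render gets a fresh Helpers
// instance with an isolated buffer; the template's local __vo aliases it.
function Helpers() {
  this.__vo = []; // per-instance output buffer, no shared state
}

// What a compiled template wrapper might look like under this scheme:
function compiledTemplate(model) {
  var html = new Helpers();
  var __vo = html.__vo; // alias the instance buffer, not a global one
  __vo.push('<p>');
  __vo.push(model.text);
  __vo.push('</p>');
  return __vo.join('');
}

// Two renders no longer stomp on each other's buffer:
var a = compiledTemplate({ text: 'one' });
var b = compiledTemplate({ text: 'two' });
```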

I'm moving on to separating my live binding implementation from CanJS's can.Observe class now.


kirbysayshi commented on August 28, 2024

Cool, can't wait to see what you come up with! I'm not entirely sure of the magnitude of the vash.Helpers class, but I'm sure I will be once you post some code. Either way, glad to hear you're making progress!

I'm thinking about adding in either callbacks or a simple evented pattern for being able to hook into template compilation and rendering. I haven't made any decisions yet, but let me know if that would help you.


kirbysayshi commented on August 28, 2024

I found a bug with the current layout helpers, where if you call html.include more than once, subsequent renders will be discarded, due to.... the global helpers instance! :)

So now I'm super eager to see what you come up with for this Helpers class. I added some tests for this to the suite, 29d42d7, hope this helps your work.


rjgotten commented on August 28, 2024

Ok. I'll try to merge that into my dev branch and see if it passes the test.
(Would be kind of cool if it does. ^_^ )

Will take a bit of time to get going on that though; iteration deadline coming up and such. Probably I'll work on it in my off time in the evening.


kirbysayshi commented on August 28, 2024

So I just pushed a fix for the bug I mentioned, and updated some of the tests. The compiler code was pretty heavily modified, switching from primarily buffer.push calls to a nicer-looking (in my opinion) "template" approach. The logic is mostly the same, save for one change. A property called __vexecuting on vash.helpers is used to know if a template is the "root" template or not during this render sequence. If the template is the root, then when it is finished executing it clears the buffer and marks execution as finished.

This change keeps everything working (better than before, since there were bugs :) ), but definitely only works in a single-threaded environment, since it's all global state. I'm wondering if it's time to build something like a "render context", which sounds a little bit like what your Helper class might be approaching.


rjgotten commented on August 28, 2024

👍
It is indeed more or less what my modifications that turn vash.helpers into the vash.Helpers class are doing.


kirbysayshi commented on August 28, 2024

I took a crack at this whole rendering context thing, and pushed my branch: https://github.com/kirbysayshi/vash/tree/tplctx

I realize that this is duplicate work, but I wanted to make my own attempt to force deep thought. I left a fairly detailed commit message (c179c36), so I won't repeat it here. I'm also not sure this actually solves your problem. There are a few things in there, like better toString support and a more robust Mark API that I'll probably keep regardless.

So I'm curious what you think, as well as how this compares to what you were working on in terms of complication and functionality.


rjgotten commented on August 28, 2024

Ah snap. Just finished the first part of the pull request after merging all the changes you made on the master branch. Hadn't noticed your updates on the tplctx branch.

The actual conversion to the class approach seems to coincide for a great deal with the approach that I submitted and is mostly compatible. The main difference seems to be that in your case you have a __vtctx rendering context that is also separate from Helpers, which is something I don't do for my employer's branch.

I like the new approach of using the special VTMark symbols in the output buffer. I'd combine the mark symbol and the index approach though, for the average-case speed improvement. First use a direct index lookup based on the old index (a fast O(1) operation) and only on the off chance that it no longer holds the correct VTMark symbol, look up the mark using indexOf (a slower O(n) operation, esp. in older IE without a native indexOf ).
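Sketched out, the combined lookup would be something like this (VMark standing in for the branch's VTMark; any unique object works as a marker):

```javascript
// VMark stands in for the branch's VTMark symbol; any unique object works.
function VMark() {}

function findMark(buffer, mark, rememberedIndex) {
  if (buffer[rememberedIndex] === mark) return rememberedIndex; // O(1) fast path
  return buffer.indexOf(mark); // O(n) fallback when the buffer has shifted
}

var m = new VMark();
var buf = ['a', m, 'b'];
findMark(buf, m, 1); // -> 1, via the direct index
buf.unshift('x');    // buffer mutated; remembered index is now stale
findMark(buf, m, 1); // -> 2, via the indexOf fallback
```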

Not too sure about flattening the buffer operations onto the helpers prototype. It may make them more accessible, but it feels a bit like clutter. Also, for safety and to prevent tampering; I wouldn't expose vout as an actual member on the Helpers instance. In case of the buffer and its operations, I'd still favor the approach of an isolated module that uses a 'private by closure' variable, per the implementation in my pull request.

On the whole, I'm leaning towards your implementation for the updated marks (with the mentioned speed boost applied) and helpers class, but towards mine on handling the internal buffer state. What do you think?


kirbysayshi commented on August 28, 2024

I think we agree on most everything here!

The VTMark symbol actually does use your speed improvement, if I'm understanding you correctly. On this line the longer O(n) operation is only performed if the direct lookup fails. Unless it doesn't behave how I think... I kinda fired most of that code off without testing!

I added another commit to the branch around yesterday, where I removed the whole __vtctx thing, and greatly simplified the template. So ours are basically the same. The only difference there is that my branch reaches out to global vash.Helpers to avoid link stuff. While I like that this simplifies precompilation, it's probably not great to rely on the global vash.

Flattening the buffer ops was because I was thinking of it as a "render context that happened to be named Helpers". :) I think your implementation of Buffer is better.

I think your implementation of hiding the vout equivalent is great, provided we cache those functions. That won't prevent me from merging the pull request though.

So once we finalize the merge, I'll port over VTMark stuff using your changes as the primary base.

One thing I thought was interesting about the branch was that the rendered template returns the Helpers instance, which when coerced to a string dumps the buffer. This allows for some interesting things, like having the context implement a promise API as mentioned in #11.


rjgotten commented on August 28, 2024

The VTMark symbol actually does use your speed improvement, if I'm understanding you correctly. On this line the longer O(n) operation is only performed if the direct lookup fails.

Heh. Well look at that. I must've totally missed that when I took a look last evening. Indeed; it already is using the speed up. That's great.

I think your implementation of hiding the vout equivalent is great, provided we cache those functions.

Yeah, I kind of haven't optimized everything yet while it is still 'in a state of flux', so to say. ;-)

One thing I thought was interesting about the branch was that the rendered template returns the Helpers instance, which when coerced to a string dumps the buffer. This allows for some interesting things, like having the context implement a promise API as mentioned in #11.

Oooh! That is interesting.

Promises and delayed evaluation were on my list of things to try and build into Vash as well. (You know; it could solve the problem with sub/ancestor-templates in layout composition helpers needing to be available synchronously as well.)


kirbysayshi commented on August 28, 2024

The layout stuff is already pretty weird, or at least feels that way due to the system working in both the browser and node.

Right now there's actually a bug regarding multiple template inheritance that I think can be fixed by clever VTMark manipulation. If you have a chain of templates, like so: layout.vash > page.vash > inner.vash, and each has a block declaration, then contents from inner.vash will be placed into the block declaration within layout.vash, yet the content surrounding the block declaration in inner.vash will still be rendered. Except that now it's out of order.

// layout.vash
@html.block('yes')

// page.vash
@html.extend('layout', function(model){
    <div class="wrapper">@html.block('yes', function(model){ <p>Default content</p> })</div>
})

// inner.vash
@html.extend('page', function(model){
    @html.body('yes', function(){ <p>Indeed!</p> })
})

Output will be:

<div class="wrapper"></div>
<p>Indeed!</p>

Where I think most people would expect:

<div class="wrapper"><p>Indeed!</p></div>

VTMarks are interesting, because they are kind of like promises. Not all of them, but using them as a placeholder for something else is definitely promise-like. With asynchronous template loading, it gets really tricky because the engine has no way of knowing if there will be promises loaded in by a sub or parent template. So it might be impossible to know when to fulfill the primary promise on a simple template.


rjgotten commented on August 28, 2024

With asynchronous template loading, it gets really tricky because the engine has no way of knowing if there will be promises loaded in by a sub or parent template. So it might be impossible to know when to fulfill the primary promise on a simple template.

A straightforward solution to predictability is to always return a promise for any operation that may potentially be async. Said promise could be resolved either immediately (if the actual operation completes synchronously), or at a later point in time.
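For illustration, using native Promises (any promise library would do; the function names here are hypothetical), the convention might look like:

```javascript
// Using native Promises purely for illustration: a cache hit still
// returns a promise, just one that is already resolved.
var templateCache = { greeting: function (m) { return 'hi ' + m.name; } };

function resolveTemplate(name, loader) {
  if (templateCache[name]) {
    return Promise.resolve(templateCache[name]); // synchronous path
  }
  return loader(name).then(function (tpl) {      // asynchronous path
    templateCache[name] = tpl;
    return tpl;
  });
}

// Callers never need to know which path was taken:
resolveTemplate('greeting').then(function (tpl) {
  // tpl({ name: 'vash' }) renders as usual
});
```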

Welding it to the current layout / master-page / extend mechanism's control flow is more tricky. You'd probably have to change some things there. I do have some ideas that could work, but I'd need time to flesh out the concept before I can present it. (It may also mean that you can get rid of the nested callback architecture...)


kirbysayshi commented on August 28, 2024

I just can't imagine generated code like this:

var __vtemp = model.everyExpressionBasically;
html.buffer.push( __vtemp && __vtemp.promise ? ( html.promises.push( __vtemp.promise ) && __vtemp.promise ) : __vtemp );

Or rather can't imagine it for every expression.

<secret type="vash history">I attempted a similar pattern when I was trying to make the compiler generate "concatenated" templates: __vo += model.property vs __vo.push( model.property ). Allowing for callbacks and function calls makes it behave oddly, because something like __vo += "markup" + model.forEach(function(x){ __vo += x }) is not going to output like the user would expect. So every expression was first assigned to a temporary variable to see if there was a return value, and fix the order.</secret>

Layout:

One thing I debated was doing it how Jade works: resolve blocks at compile time. But that's a huge architecture change, going almost all the way to the lexer. It would require the parser to load files, spawn lexers + compilers, and might require the parser to actually be a parser and compose complex tokens, like LAYOUT_BLOCK which are then processed by another compiler.


SLaks commented on August 28, 2024

I haven't read this whole discussion, but the ideal way to do layouts with sections is probably closer to the way ASP.Net WebPages does it. (see the source)

Each Razor template is compiled into a class that inherits the WebPageBase class.

When rendering the page, the system maintains a stack of contexts to handle layout pages.

This is built on the ability to put a chunk of template into an anonymous method (lambda / function expression) that returns the rendered content. By contrast, Vash (like ASPX files) currently can only make functions that render their content to the current position in the stream. However, since the function's caller can still extract that content, this shouldn't be an issue.

Section contents would be compiled (in content pages) into calls that look like

html.defineSection("someName", function() { return content; });

This function would add the function to a private map of section definitions within the topmost context on the stack.

After rendering the entire content page, the base render() method would check whether the layout property has been set.
If there is a layout, it will put the entire rendered body onto the topmost context, then push a new context onto the stack to render the layout page.

From within the layout page, the renderSection() method would check the second-to-top entry on the context stack to find the section defined by the page that called the layout page. The renderBody() would similarly get the body text from the second-to-top context.

This architecture allows the layout page to have its own layout without interfering at all with the inner content page. Sections from each level of layout would occupy different contexts on the stack and would not affect each other.
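A minimal sketch of that stack mechanism (illustrative names, not the actual WebPages source):

```javascript
// Illustrative context stack: each page pushes a context holding its
// sections and body; a layout page reads the second-to-top context.
var stack = [];

function renderPage(page, model) {
  var ctx = { sections: {}, body: '' };
  stack.push(ctx);
  ctx.body = page(model, ctx);
  // If the page declared a layout, render it with this context beneath it.
  var out = ctx.layout ? renderPage(ctx.layout, model) : ctx.body;
  stack.pop();
  return out;
}

function renderBody() {
  return stack[stack.length - 2].body; // second-to-top: the content page
}
function renderSection(name) {
  return stack[stack.length - 2].sections[name] || '';
}

// A content page declares its layout and sections; the layout consumes them:
function content(model, ctx) {
  ctx.layout = layout;
  ctx.sections.head = '<style>p{}</style>';
  return '<p>' + model.text + '</p>';
}
function layout() {
  return '<head>' + renderSection('head') + '</head>' +
         '<body>' + renderBody() + '</body>';
}

var page = renderPage(content, { text: 'hi' });
```

Because each level of layout gets its own context on the stack, nested layouts never see each other's sections.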

For more implementation details, see the source.


rjgotten commented on August 28, 2024

I'm going to agree with SLaks here. When I said in my previous post that I 'had some ideas that might work', I pretty much had the method employed by WebPages in mind.


SLaks commented on August 28, 2024

I think the simplest way to implement this within Vash would be to pass the context from the current template as a second parameter to the layout template.

The content template would write

@html.setLayout(someTemplate)
<div>...</div>
@html.defineSection("head", function() {
    <style>...</style>
})

The compiled method would end with

if (html._layout === null)
    return html.__vo.join('');
else
    return resolveTemplate(html._layout)(model, html);

(The html object would contain the context, meaning the sections hash and the body)

The compiled layout template would start with

function compiled(model, parentContext) {
    var html = new vash.Helpers();
    if (parentContext)
        html.parentContext = parentContext;  // This is being used as a layout template

    html.__vo.push(...);

    if (parentContext) {
        // Verify that all sections have been rendered
        if (!html._renderBodyCalled)
            throw new Error("Layout template must call html.renderBody()");
        if (!Object.keys(parentContext._sections)
                   .every(html._renderedSections.hasOwnProperty.bind(html._renderedSections)))
            throw new Error("Some sections were not rendered");
    }
    return html.__vo.join('');
}


kirbysayshi commented on August 28, 2024

I have two goals for layout:

  1. The layout helpers are completely optional. If the user doesn't need to worry about layouts, then they don't need the code. This means no hard layout-dependencies inside the compiled templates, unless layout resolution happens at compile time.
  2. Provide the necessary integration points / hooks to allow for Jade-style, WebPages-style, and whatever other types of layout strategies.

I need a lot of help with 2), and what follows is one possible idea.

It seems that the primary pain point, at least what I found while implementing the current layout helpers, is there is no way of knowing when a template has "finished" rendering. Knowing this would allow for block resolution and injection into the proper places and contexts once everything is in place instead of when layout methods are called on the fly.

One way to implement this is by allowing Helpers to become an event emitter. It could emit:

  • renderend basically right before returning, passing the context in the callback
  • renderstart ... at the start

These events would only be used internally. For example, there could be an ensureLayoutListeners method that would only bind listeners once, but would be called from every layout method.

Passing in the parentContext may also be required, so thanks for that example.
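A minimal sketch of that emitter idea (illustrative only; the event names follow the list above, everything else is hypothetical):

```javascript
// Illustrative only: Helpers as a tiny event emitter so layout code can
// hook renderstart / renderend.
function Helpers() {
  this.__vo = [];
  this._listeners = {};
}
Helpers.prototype.on = function (evt, cb) {
  (this._listeners[evt] = this._listeners[evt] || []).push(cb);
};
Helpers.prototype.emit = function (evt, data) {
  (this._listeners[evt] || []).forEach(function (cb) { cb(data); });
};

// A compiled template could bracket its render with the two events:
function render(html, model) {
  html.emit('renderstart', html);
  html.__vo.push('<p>' + model.text + '</p>');
  html.emit('renderend', html);
  return html.__vo.join('');
}

var h = new Helpers();
var seen = [];
h.on('renderend', function (ctx) { seen.push(ctx.__vo.join('')); });
var out = render(h, { text: 'hi' }); // listener sees the finished buffer
```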


rjgotten commented on August 28, 2024

One way to implement this is by allowing Helpers to become an event emitter.

I'd go with a callback instead of a full event stack. Seems more lightweight. Alternatively, if you're already thinking of integrating promises; use a promise and don't resolve it until the sub-template is done.

(Btw; should we maybe split this discussion into its own issue?)


kirbysayshi commented on August 28, 2024

Absolutely right, done: #15


kirbysayshi commented on August 28, 2024

Not really sure where this ended up, but if there is still a live-binding integration sitting somewhere, please reopen!

