panda-9000's Introduction

Introducing The Panda-9000

The Panda-9000, or P9K for short, is a task and dependency tool, similar to Gulp, but based on the reactive JavaScript library, Panda River.

Installation

npm i -g panda-9000

Usage

p9k [<task-name>...]

If no arguments are given, the default task name is used.

Task definitions should be placed in the tasks directory.

Defining Tasks

To define tasks, place a CoffeeScript or JavaScript file at tasks/index.coffee or tasks/index.js in your project.

For example, here's a simple hello, world task.

{task} = require "panda-9000"

task "hello-world", ->
  console.log "Hello, World"

Run the task like this:

p9k hello-world

panda-9000's People

Contributors

benniemosher, diminish7, dyoder, f1337, freeformflow, jessefrye

panda-9000's Issues

Support wild-card notation

I'd like Panda-9000 to support wild-card task invocation. If you invoke the build task, it should be understood that all the build/* tasks are included. That is, defining an x/y task should tacitly create an x task that depends upon it. If you then define an x task explicitly, you should get a warning that you've redefined a previously defined task. (Redefinition warnings are a feature that also needs to be added; perhaps a separate task method, ex: overrideTask, could be provided to silence the warning.)
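
As a rough sketch of the proposed behavior, a wild-card-aware runner might expand a bare task name into its slash-prefixed children. This is a hypothetical implementation, simplified to synchronous tasks; define and invoke here stand in for P9K's actual API:

```javascript
// Hypothetical sketch of wild-card task expansion; synchronous for brevity
// (the real runner would await each task).
const tasks = new Map();

function define(name, fn) {
  // Warn when a previously defined task is redefined.
  if (tasks.has(name)) console.warn(`warning: task '${name}' redefined`);
  tasks.set(name, fn);
}

function invoke(name) {
  // An exact match wins; otherwise fan out to every 'name/*' task.
  if (tasks.has(name)) return tasks.get(name)();
  const children = [...tasks.keys()].filter((key) => key.startsWith(name + "/"));
  if (children.length === 0) throw new Error(`no such task: ${name}`);
  for (const child of children) tasks.get(child)();
}
```

Note that an exact task name wins, and only a bare name with no exact match fans out to its children.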

Top-level task runner

We need a top-level task runner because a nested task can't see the big picture. The runner needs to be accessible from within tasks. This makes it possible for long-running tasks (ex: a file watcher) to run tasks. In fact, any dynamic execution of tasks, outside the runner, should be triggered this way.

In combination with task memoization, I think this solves all our problems. Tasks that return promises will block all dependent tasks. Other tasks won't block at all, but won't run again, either. The runner can maintain its own memoization cache that gets cleared when the top-level task is finished.
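
The memoization described above might be sketched as follows (hypothetical; define and makeRunner are illustrative names, not P9K's API). Caching the task's promise, rather than its result, means dependent tasks block on it while other callers simply reuse it:

```javascript
// Hypothetical sketch of a top-level runner with per-run memoization.
const definitions = new Map();

function define(name, dependencies, fn) {
  definitions.set(name, { dependencies, fn });
}

function makeRunner() {
  // The cache lives for one top-level run, then is discarded with the runner.
  const cache = new Map();
  const run = (name) => {
    if (!cache.has(name)) {
      const { dependencies, fn } = definitions.get(name);
      // Cache the promise itself: dependents await it, repeat callers reuse it,
      // so each task runs at most once per top-level run.
      cache.set(name, (async () => {
        await Promise.all(dependencies.map(run));
        return fn();
      })());
    }
    return cache.get(name);
  };
  return run;
}
```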

P9K Roadmap

DRAFT

Current Status

We have the original interface working using River 4.x, albeit with a few minor changes. Support for task arguments was removed. The tests were simplified. In the future, plugins should be added as separate modules. Existing plugins will be factored into separate modules as well.

Removal Of Task Arguments

Task arguments were a supported feature, so why remove them?

Task arguments were the result of conflating the task CLI with the plugins themselves. In cases where you're using P9K programmatically, it's probably better just to use ordinary functions. By moving the plugins into their own modules, they can be re-used without complicating P9K.

Additionally, if we decide we want to support task arguments, the CLI also needs to support them. But a more flexible alternative might be to simply provide support for a configuration file. In the meantime, environment variables can be used as a poor man's way to pass arguments.

Example: Using environment variables to pass arguments into tasks.
PANDA_RIVER_TARGET=filters p9k npm:test

To-Do

These are not listed in any particular order, and are described in more detail in the sections that follow.

  • Support for parallel tasks
  • p9k-asset
  • Streams
  • Plugins
    • p9k-coffee
    • p9k-pug
    • p9k-stylus
    • p9k-template
    • p9k-markdown
    • p9k-yaml
    • p9k-webpack
    • p9k-babel
    • Biscotti support?
    • JSTransformer support
  • Nested Contexts?
  • Transform Helpers
  • Watch Task
  • Virtual File System
  • Configuration File

Support For Parallel Tasks

Use & to denote a task upon which we don't need to await:

define "poem", [ "essay&", "short-story&" ], ->

which says to run essay and short-story without waiting for them to complete, and then run the poem task. I think this is much more flexible than what Gulp offers, which conflates task running with function composition.
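
A minimal sketch of how the & suffix could be parsed and applied (hypothetical implementation; parseDependency and runDependencies are illustrative names):

```javascript
// Sketch of interpreting the proposed '&' suffix on dependency names.
function parseDependency(name) {
  return name.endsWith("&")
    ? { name: name.slice(0, -1), await: false }
    : { name, await: true };
}

async function runDependencies(names, run) {
  for (const spec of names.map(parseDependency)) {
    const result = run(spec.name);
    // '&' tasks are started but not awaited before the dependent task runs.
    if (spec.await) await result;
  }
}
```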

p9k-asset

The p9k-asset package provides:

  • An asset schema, including source and target.
  • A create function that takes a pathname and returns an asset object.
  • A read function that will read an asset from the filesystem.
  • A write function that will write an asset to the filesystem.
  • An extension function that sets the extension for the target.

Streams

Both read and write are streaming operations that yield a promise when they finish. Plugins that can accept a stream can access it via the stream property. Otherwise, they should wait after a read or write operation (presuming they're using River, otherwise they can simply await) and use the content property. Conversely, write will look for a content or stream property within target.

Example: A reactive flow using streams.
go [
  await glob "*.md"
  map create
  map read
  # a streaming markdown plugin
  map markdown
  map write
  # we want to make sure the file
  # is written out before we finish
  # the task
  wait 
]

Plugins

P9K plugins are really just panda-asset plugins. They take an asset, update it, and return it. That's it. Often, the plugin need only take either the content or stream source property, transform it, and set the appropriate target property.

However, plugins may (due to the nature of the transformation) need to read from or write to the filesystem directly, which is fine, too. All that matters is that the transformation itself is fully specified in the documentation for the plugin.

The initial set of plugins include only those we need for our own projects. An alternative approach would be to support any JSTransformer, which would instantly give us dozens of plugins, including most (all?) of the ones we need.
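
A complete plugin can therefore be a one-line function. Here is a trivial, hypothetical example that uppercases in-memory content (the asset shape is assumed from the p9k-asset description above):

```javascript
// Minimal in-memory plugin sketch: read the source content, transform it,
// set the target content, and return the asset.
function uppercase(asset) {
  asset.target.content = asset.source.content.toUpperCase();
  return asset;
}
```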

p9k-coffee

p9k-pug

p9k-stylus

p9k-template

p9k-markdown

p9k-yaml

p9k-webpack

p9k-babel

Recursive Deletion

Just import rmr from panda-quill and use it in a task.

Example: Deleting files before a build.
define "clean", -> rmr target

Nested Contexts

Our Web site has a complex flow that first compiles a markdown file, compiles a corresponding YAML file, and finally compiles a Pug template based on the YAML file. Getting Gulp to support this is a hack. P9K has basically the same problem. What seems to be missing is a way to nest contexts.

In our case, we have the compiled markdown, which becomes the target's content. But we effectively construct a second context based on the source path, by altering the extension. (What we really do is create an in-memory configuration, but that's partly because there's no natural way to link data to a context, which is ideally what we can address this way.) So far, so good.

But once we have the compiled YAML, now as the target content, which in this case is an object, we need to pass both it and the markdown into a third context, which is the Pug template.

In theory, we could just keep updating the same context. Alter the source path. Move the parsed YAML object into a data attribute. Add the rendered markdown to that as a markdown property. That would work but it feels like we're missing some way to just manage multiple contexts.

This same pattern comes up when we want to associate data with a Pug template. It also comes up when we want to use Handlebars to preprocess assets.

Is Biscotti The Answer?

Or do we just need smarter plugins? For example, Biscotti, because it's full-fledged CoffeeScript, could literally just read its own data files and run Pug templates directly from within the plugin. The problem with this is that those still feel like they should be separate plugins and there should be an easy means to effectively compose plugins.

Labeled Contexts?

One way would be to allow the asset helpers to operate on labeled contexts. So create could optionally take arguments: one for the label of the new context and another for the label of the context from which to create it, if necessary. If there's no second argument, you just assume you're being given the path directly. read and write could also both take context labels.

The resulting task would look like this:

go [
  await glob "**/*.md"
  # label the initial context 'markdown'
  map create "markdown"
  map read "markdown"
  map markdown
  # create a second context from the first
  # pass a function to get the corresponding path
  map create "yaml", ({markdown}) ->
    {directory, name} = markdown.source
    join directory, "#{name}.yaml"
  map read "yaml"
  map yaml
  # create a third context; again, we pass
  # a function to grab the template path
  map create "pug", ({yaml}) -> yaml.target.content.template
  map read "pug"
  # be sure to add the data and markdown to the pug source
  tee ({markdown, yaml, pug}) ->
    pug.source.data = yaml.target.content
    pug.source.data.markdown = markdown.target.content
  map pug
  map write "pug"
  wait
]

Transform Helpers

For transformations that follow the typical pattern (read, transform, set the target extension, write), P9K offers helpers so that you can create tasks using plugins more easily. Both in-memory and streaming transforms are supported.

Example: Pug compilation using helper.
define "pug", transform pug()

Not sure this is necessary, since the typical pattern can just be written in five lines, using pipe.

go [ 
  await glob "**/*.pug"
  map pipe create, read, pug(), write
  wait
]
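
A left-to-right async pipe like the one used above can be sketched in a few lines (hypothetical implementation, not P9K's; each stage receives the previous stage's awaited result):

```javascript
// Compose async (or sync) unary functions left to right.
function pipe(...fns) {
  return async (value) => {
    // Each stage's result is awaited before it feeds the next stage.
    for (const fn of fns) value = await fn(value);
    return value;
  };
}
```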

Watch Task

A watch task could be added to parchment and then easily reused as needed, rather than adding it directly to P9K, just as with recursive deletion.

Web Server

We should be able to use any file-based Web server, possibly wired up to a virtual filesystem for speed.

Virtual File System Support

With watch support and a Web server, it doesn't make sense to write files out to the filesystem, since it slows the build. There are some increasingly interesting virtual filesystem projects out there that we can drop into a P9K task definition file for this. TODO: add links.

Context spec needed

It's not entirely clear that the p9k interface for pipeline components is ideal. At the moment, the idea is that each component updates a context. Downstream components depend on elements being defined in the context. This is necessary to avoid losing information along the way, like the relative path (necessary for writing files). This “blackboard” style is extremely flexible, but it's also hard to debug. One way around this is for each component to simply verify that the properties it requires are present and generate warnings if they're missing. In cases where missing properties are what we want (that is, where the component should act as a no-op in the pipeline), we could explicitly reject products that don't meet the given criteria, thus quieting the warnings.
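
The verification idea might look like a small wrapper that checks required context properties before delegating to the component (a sketch with illustrative names, not a proposed API):

```javascript
// Sketch: wrap a pipeline component so it warns when required context
// properties are missing, passing the context through unchanged.
function requires(properties, component) {
  return (context) => {
    const missing = properties.filter((p) => context[p] === undefined);
    if (missing.length > 0) {
      console.warn(`missing context properties: ${missing.join(", ")}`);
      return context; // act as a no-op rather than failing
    }
    return component(context);
  };
}
```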

Explicitly export object

You can't import p9k from "panda-9000" because it returns undefined. The export should be:

export default {run, define}

or perhaps:

p9k = {run, define}
export default p9k

Add data to tests

Add tests for ensuring that data is compiled into templates correctly

How to trigger a reload in a helper?

Right now, helpers don't load content from the file if the source.content attribute is already set. This helps with flows where the content is set independently of the path. However, the drawback is that the content is never reloaded. This requires an event to reset the content property. This seems to be working okay (H9's Web server simply re-runs the survey task for each request), but we may want to re-think this. It's tough to come up with an alternative without either breaking the virtual asset scenario or complicating the context object.

Add watch helper back

I got rid of the watch helper, but I need to add it back. The key is figuring out what it should actually do. What I'm doing in H9 is just watching an entire directory and then reloading all the assets. That seems to work well enough (using chokidar). But do we need a more fully-featured variant?
