DRAFT
P9K Roadmap
Current Status
We have the original interface working using River 4.x, albeit with a few minor changes: support for task arguments was removed, and the tests were simplified. Going forward, new plugins should be added as separate modules, and existing plugins will be factored out into separate modules as well.
Removal Of Task Arguments
Task arguments were part of the original interface, so why remove them?
Task arguments were the result of conflating the task CLI with the plugins themselves. In cases where you're using P9K programmatically, it's probably better just to use ordinary functions. By moving the plugins into their own modules, they can be re-used without complicating P9K.
Additionally, if we decide we want to support task arguments, the CLI also needs to support them. But a more flexible alternative might be to simply provide support for a configuration file. In the meantime, environment variables can be used as a poor man's way to pass arguments.
Example: Using environment variables to pass arguments into tasks.

```sh
PANDA_RIVER_TARGET=filters p9k npm:test
```
To-Do
These are not listed in any particular order, and are described in more detail in the sections that follow.
Support For Parallel Tasks
Use `&` to denote a task upon which we don't need to await:

```coffee
define "poem", [ "essay&", "short-story&" ], ->
```

which says to run `essay` and `short-story` without waiting for them to complete, and then run the `poem` task. I think this is much more flexible than what Gulp offers, which conflates task running with function composition.
p9k-asset
The `p9k-asset` package provides:

- An asset schema, including `source` and `target`.
- A `create` function that takes a pathname and returns an asset object.
- A `read` function that will read an asset from the filesystem.
- A `write` function that will write an asset to the filesystem.
- An `extension` function that sets the extension for the target.
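For illustration, an asset produced by `create` might look something like the sketch below. The property names beyond `source` and `target` are assumptions for the sake of the example, not the actual schema.

```coffee
# Hypothetical asset shape (illustrative only):
asset =
  source:
    path: "posts/hello.md"    # the pathname given to create
    content: undefined        # populated by read
    stream: undefined         # populated by read, when streaming
  target:
    path: "posts/hello.html"  # adjusted via extension
    content: undefined        # set by a plugin, consumed by write
```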
Streams
Both `read` and `write` are streaming operations that yield a promise when they finish. Plugins that can accept a stream can access it via the `stream` property. Otherwise, they should `wait` after a read or write operation (presuming they're using River; otherwise, they can simply `await`) and use the `content` property. Conversely, `write` will look for a `content` or `stream` property within `target`.
Example: A reactive flow using streams.
```coffee
go [
  await glob "*.md"
  map create
  map read
  # a streaming markdown plugin
  map markdown
  map write
  # we want to make sure the file is
  # written out before we finish the task
  wait
]
```
Plugins
P9K plugins are really just `panda-asset` plugins. They take an asset, update it, and return it. That's it. Often, the plugin need only take either the `content` or `stream` property of the `source`, transform it, and set the appropriate `target` property.

However, plugins may (due to the nature of the transformation) need to read from or write to the filesystem directly, which is fine, too. All that matters is that the transformation itself is fully specified in the documentation for the plugin.
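As a sketch of that contract, here's a hypothetical in-memory plugin, assuming the asset shape described above. The names are illustrative, not part of any actual P9K API.

```coffee
# Hypothetical plugin: uppercase the source content.
# Takes an asset, updates its target, returns it.
uppercase = (asset) ->
  asset.target ?= {}
  asset.target.content = asset.source.content.toUpperCase()
  asset
```

A streaming plugin would follow the same shape, operating on the `stream` property instead of `content`.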
The initial set of plugins includes only those we need for our own projects. An alternative approach would be to support any JSTransformer, which would instantly give us dozens of plugins, including most (all?) of the ones we need.
- p9k-coffee
- p9k-pug
- p9k-stylus
- p9k-template
- p9k-markdown
- p9k-yaml
- p9k-webpack
- p9k-babel
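If we went the JSTransformer route, a single generic adapter might cover most of these. This is a sketch: `fromJSTransformer` is a hypothetical helper and it assumes the asset shape described earlier, though `jstransformer` and the `jstransformer-*` packages themselves are real, and `render` does return a `{body}` result.

```coffee
jstransformer = require "jstransformer"

# Hypothetical adapter: wrap any JSTransformer as a P9K plugin.
fromJSTransformer = (name) ->
  transformer = jstransformer require "jstransformer-#{name}"
  (asset) ->
    asset.target ?= {}
    asset.target.content = (transformer.render asset.source.content).body
    asset

# For example, a markdown plugin via jstransformer-marked:
markdown = fromJSTransformer "marked"
```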
Recursive Deletion
Just import `rmr` from `panda-quill` and use it in a task.

Example: Deleting files before a build.

```coffee
define "clean", -> rmr target
```
Nested Contexts
Our Web site has a complex flow that first compiles a markdown file, compiles a corresponding YAML file, and finally compiles a Pug template based on the YAML file. Getting Gulp to support this is a hack. P9K has basically the same problem. What seems to be missing is a way to nest contexts.
In our case, we have the compiled markdown, which becomes the target's `content`. But we effectively construct a second context based on the source path, by altering the extension. (What we really do is create an in-memory configuration, but that's partly because there's no natural way to link data to a context, which is ideally what we can address this way.) So far, so good.

But once we have the compiled YAML, now as the target `content`, which in this case is an object, we need to pass both it and the markdown into a third context, which is the Pug template.

In theory, we could just keep updating the same context: alter the source path, move the parsed YAML object into a `data` attribute, and add the rendered markdown to that as a `markdown` property. That would work, but it feels like we're missing some way to just manage multiple contexts.
This same pattern comes up when we want to associate data with a Pug template. It also comes up when we want to use Handlebars to preprocess assets.
Is Biscotti The Answer?
Or do we just need smarter plugins? For example, Biscotti, because it's full-fledged CoffeeScript, could literally just read its own data files and run Pug templates directly from within the plugin. The problem with this is that those still feel like they should be separate plugins, and there should be an easy means to effectively compose plugins.
Labeled Contexts?
One way would be to allow the asset helpers to operate on labeled contexts. So `create` could optionally take arguments: one for the label of the new context, and another for the label of the context from which to create it, if necessary. If there's no second argument, you just assume you're being given the path directly. `read` and `write` could also both take context labels.
The resulting task would look like this:
```coffee
go [
  await glob "**/*.md"
  # label the initial context 'markdown'
  map create "markdown"
  map read "markdown"
  map markdown
  # create a second context from the first;
  # pass a function to get the corresponding path
  map create "yaml", ({markdown}) ->
    {directory, name} = markdown.source
    join directory, "#{name}.yaml"
  map read "yaml"
  map yaml
  # now we create a third context; again,
  # we pass a function to grab the template path
  map create "pug", ({yaml}) -> yaml.target.content.template
  map read "pug"
  # be sure to add the data and markdown to the pug source
  tee ({markdown, yaml, pug}) ->
    pug.source.data = yaml.target.content
    pug.source.data.markdown = markdown.target.content
  map pug
  map write "pug"
  wait
]
```
Transform Helpers
For transformations that follow the typical pattern (read, transform, set the target extension, write), P9K offers helpers so that you can create tasks using plugins more easily. Both in-memory and streaming transforms are supported.

Example: Pug compilation using a helper.

```coffee
define "pug", transform pug()
```

Not sure this is necessary, since the typical pattern can just be written in five lines, using `pipe`:
```coffee
go [
  await glob "**/*.pug"
  map pipe create, read, pug, write
  wait
]
```
Watch Task
A watch task could be added to parchment and then easily reused as needed, rather than adding it directly to P9K, just as with recursive deletion.
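As a sketch of what such a task might look like once extracted, assuming the real `chokidar` watcher and a hypothetical `run` helper for invoking another task by name:

```coffee
chokidar = require "chokidar"

define "watch", ->
  # rebuild whenever a source file changes
  chokidar.watch "src/**/*"
    .on "change", -> run "build"  # `run` is a hypothetical helper
```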
Web Server
We should be able to use any file-based Web server, possibly wired up to a virtual filesystem for speed.
Virtual File System Support
With `watch` support and a Web server, it doesn't make sense to write files out to the filesystem, since it slows down the build. There are some increasingly interesting virtual filesystem projects out there that we can drop into a P9K task definition file for this. TODO: add links.