
colorbleed-config's People

Contributors

aardschok, bigroy, mottosso, svenneve


colorbleed-config's Issues

Add custom pyblish test to also fail after validation if anything errors during extraction or integration.

Issue

Currently, whenever one Extractor fails with an error, the remaining Extractors still start processing. Integration is still skipped, because the Integrator explicitly checks whether there have been any errors and, if so, disallows integration into the pipeline.

The problem with this is that the first Extractor could fail and other long-running Extractors would still continue, even though we already know their work will be useless.

Solution

We can replace Pyblish's default test, which only stops after validation if any errors occurred, with our own test that also stops in these later cases.

For example:

import pyblish.logic
import pyblish.api

def custom_test(**vars):

    # Keep default behavior
    default_result = pyblish.logic.default_test(**vars)
    if default_result:
        return default_result

    # Add custom behavior
    # Fail on anything after validation having an error.
    after_validation = pyblish.api.ValidatorOrder + 0.5
    if any(order >= after_validation for order 
           in vars["ordersWithErrors"]):
        return "failed after validation"


pyblish.api.register_test(custom_test)

Note that this would have the downside that Cleanup would not get triggered either, so the local disk (staging dir) might be left with a filled-up temporary folder. This could be an additional problem that might need to be taken care of.

Another workaround could be to have our Extractors themselves check at the start whether any errors have occurred and, if so, raise an error, while letting the Cleanup plug-in always run.
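
For illustration only, a minimal sketch of that idea, assuming the results Pyblish records in context.data["results"] are available to later plug-ins (the class and extract() method names are hypothetical, not part of this config):

import pyblish.api


class AbortingExtractor(pyblish.api.InstancePlugin):
    """Hypothetical base Extractor that refuses to run after earlier errors."""

    order = pyblish.api.ExtractorOrder

    def process(self, instance):
        context = instance.context

        # Pyblish appends a result per processed plug-in; any non-empty
        # "error" entry means something failed earlier in the publish.
        results = context.data.get("results", [])
        if any(result.get("error") for result in results):
            raise RuntimeError("Aborting extraction: an earlier plug-in errored.")

        self.extract(instance)

    def extract(self, instance):
        # Concrete Extractors would implement their actual extraction here.
        raise NotImplementedError("Subclass must implement extract()")

Because the failure is raised from inside the Extractor itself, the default test is untouched and a Cleanup plug-in can still run afterwards.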

No application definition could be found for 'fusionrendernode9'

Issue

Getting started with the current master branch might have you hit the following error:

Traceback (most recent call last):
  File "C:\path\to\avalon-setup\git\avalon-launcher\launcher\app.py", line 79, in on_object_created
    self.controller.init()
  File "C:\path\to\avalon-setup\git\avalon-launcher\launcher\control.py", line 265, in init
    actions = self.collect_compatible_actions(discovered_actions)
  File "C:\path\to\avalon-setup\git\avalon-launcher\launcher\control.py", line 479, in collect_compatible_actions
    if not Action().is_compatible(session):
  File "C:\path\to\avalon-setup\git\colorbleed-config\colorbleed\launcher_actions.py", line 21, in __init__
    self.config = lib.get_application(self.name)
  File "C:\path\to\avalon-setup\git\avalon-core\avalon\lib.py", line 158, in get_application
    "No application definition could be found for '%s'" % name
ValueError: No application definition could be found for 'fusionrendernode9'

This is likely due to the Launcher trying to find a fitting application .toml definition for the application, which is stored outside of the config.

Solution

Ideally this should be handled in a friendly manner so the Launcher can continue, with the message describing how to resolve it.

For now, a workaround is to comment out the last two lines of register_launcher_actions() in colorbleed/launcher_actions.py so you have something like:

def register_launcher_actions():
    """Register specific actions which should be accessible in the launcher"""
    pass
    # pipeline.register_plugin(api.Action, FusionRenderNode)
    # pipeline.register_plugin(api.Action, VrayRenderSlave)

If you installed with avalon-setup, the default application definition .toml files can be found in avalon-setup\bin.

Internal Scripts missing

Problem

# Error: ImportError: file C:\Users\admin\colorbleed-config\colorbleed\maya\menu.py line 26: No module named scriptsmenu.launchformaya # 

It seems this config references internal scripts (the scriptsmenu package) that are not included with it.

Automated testing

Issues

With each iteration of validators and pipeline implementations there's a lot of shifting in what is expected to work and what actually works. We need to lock down functionality so we can be certain that what was originally intended actually works as expected: we need automated testing!

Basically we need to:

  1. Ensure that what we consider valid comes through in the way we expect it to.
  2. Ensure that what we consider invalid fails, and that it is clear to the artist what has been happening.

As we continue coding and solve problems that arise along the way, we should enforce that each fix holds forever by implementing a test for it as well, so:

  1. Problems that came through which we now consider invalid should get a test to ensure whatever fix we build holds now and in the future.

An example would be the following condition (an arbitrary rule, shown as an example):

A model publish may not have a camera in the instance.

As such we would want to test that it is actually NOT being published:

# pseudocode
import maya.cmds as cmds
import avalon.maya
import pyblish.util

def test_no_camera_in_model():

    cmds.file(force=True, new=True)
    transform = cmds.createNode("transform", name="camera")
    cmds.createNode("camera", name="cameraShape", parent=transform)

    # Create a model publish set containing the camera
    # (exact create() arguments simplified for this pseudocode)
    cmds.select(transform, replace=True)
    avalon.maya.create("colorbleed.model")

    # Collect and validate, then check the recorded results for errors
    context = pyblish.util.collect()
    pyblish.util.validate(context)
    errors = [r["error"] for r in context.data["results"] if r["error"]]
    assert errors, "Expected validation to fail for a camera in the model"

The reason it doesn't pass might be that it doesn't have a Colorbleed id attribute, because the camera shape and transform wouldn't get one. The test would pass and production is safe! However, if the rules for the Colorbleed ids ever change, this might suddenly pass. As such our tests will ensure our pipeline behaves the way we expect, even when we weren't expecting something to break at some point.

These tests might run through unittest or nose and could eventually be connected to a system like Travis, which would allow the tests to run automatically off-site in a dedicated environment on each git commit!
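
As a rough illustration of that direction, the test above could be wrapped in unittest so it also runs under nose or a CI runner. This is only a sketch: it assumes a Maya standalone (mayapy) environment and that the test function lives in the same module.

# minimal sketch: running the check through unittest under mayapy
import unittest

import maya.standalone


class ModelPublishTests(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # Required when running outside an interactive Maya session
        maya.standalone.initialize()

    def test_no_camera_in_model(self):
        # Calls the pseudocode function defined above
        test_no_camera_in_model()


if __name__ == "__main__":
    unittest.main()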

For a reference on automated testing with avalon, see: https://github.com/mindbender-studio/config/blob/master/polly/tests.py

Increment current file overwrites.

Issue

When opening a previous version and publishing, the current file gets incremented and overwrites the next version.

Solution

Check for existing files and keep incrementing until an available file name is found.
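
For illustration only, a small helper along those lines, assuming version numbers are embedded in the file name as a zero-padded "_v###" token (the function name and pattern are hypothetical, not part of this config):

import os
import re


def next_available_version(filepath):
    """Return a path whose version is bumped past any existing file.

    Illustrative sketch: assumes the file name carries a zero-padded
    version token such as '_v001'.
    """
    directory, filename = os.path.split(filepath)
    match = re.search(r"_v(\d+)", filename)
    if not match:
        raise ValueError("No version token found in: %s" % filename)

    padding = len(match.group(1))
    version = int(match.group(1))

    # Keep incrementing until we land on a file name that does not exist yet
    while True:
        version += 1
        token = "_v{0:0{1}d}".format(version, padding)
        candidate = filename[:match.start()] + token + filename[match.end():]
        path = os.path.join(directory, candidate)
        if not os.path.exists(path):
            return path

Running a check like this right before incrementing on publish would keep an older work file from silently claiming a version that is already on disk.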
