The plugfest/hackathon has limited time, so we should be efficient and work out ahead of time what to do.
I see a tradeoff between two worthwhile objectives. One objective is to test interoperability between different implementations; this exercises each implementation and surfaces clarifications needed in the OpenC2 specifications. A different objective is to demonstrate use case scenarios.
One example showing the different approaches would be the following use case: Prod1 is an orchestrator with OpenC2 producer capability. Cons2 is an IoT actuator with OpenC2 consumer capability. The user has the following policy for new devices entering their network, based on reviewing the SBoM of the new device (a decision-tree sketch follows the list):
- if any software components have a pedigree/provenance to DPRK - sandbox the device in a special deception center for analysis
- if any software components have known malware - sandbox the device in a malware detonator for analysis
- if any software components have CVEs with CVSS > 3 - install the device in an update DMZ and perform software updates before restarting this process
- if any software components are on a user watch list - install the device in the normal network with increased firewall/IDS/etc. logging
- otherwise install the device normally
The set of use case scenarios consists of adding Cons2 to Prod1's network and exercising the various legs of the policy.
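To make the policy concrete, here is a minimal sketch of the orchestrator's decision tree in Python. Everything in it is an assumption for illustration: the function name `onboarding_action`, the SBoM field names (`components`, `provenance`, `known_malware`, `vulnerabilities`, `cvss`), and the action labels are loosely modeled on CycloneDX-style SBoMs and are not taken from any OpenC2 specification.

```python
# Hypothetical sketch of Prod1's onboarding decision tree. Field names are
# illustrative (CycloneDX-flavored), not from any OpenC2 spec.

CVSS_THRESHOLD = 3.0

def onboarding_action(sbom: dict, watch_list: set) -> str:
    components = sbom.get("components", [])
    if any(c.get("provenance") == "DPRK" for c in components):
        return "sandbox-deception-center"
    if any(c.get("known_malware") for c in components):
        return "sandbox-malware-detonator"
    if any(v.get("cvss", 0) > CVSS_THRESHOLD
           for c in components
           for v in c.get("vulnerabilities", [])):
        return "update-dmz"  # patch in the DMZ, then restart this process
    if any(c.get("name") in watch_list for c in components):
        return "normal-network-increased-logging"
    return "normal-network"
```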
Both the 'just test the commands' approach and the 'demonstrate the use case' approach involve creating scenarios for the different cases and analyzing the OpenC2 commands and responses. In the former, the scenarios inform which commands to test. In the latter, you actually exercise the decision tree.
In the 'just test the commands' approach you might send just one 'get SBoM' command and validate that you get a response. Then, totally independently, you might send an 'increase logging on firewall for this device' command. And independent of that you might send ...
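As a rough illustration of that first check, the snippet below sends a single OpenC2 command over HTTPS and validates the response status. The endpoint URL is hypothetical, and the `sbom` target name assumes the draft SBOM actuator profile; the media type follows the OpenC2 HTTPS transfer specification.

```python
import requests

# Hypothetical consumer endpoint; replace with Cons2's actual URL.
OPENC2_ENDPOINT = "https://cons2.example.net/openc2"

# 'query sbom' assumes the draft SBOM actuator profile's target name.
cmd = {"action": "query", "target": {"sbom": {}}}

resp = requests.post(
    OPENC2_ENDPOINT,
    json=cmd,
    # Media type per the OpenC2 HTTPS transfer specification.
    headers={"Content-Type": "application/openc2-cmd+json;version=1.0"},
    timeout=10,
)
body = resp.json()
# An OpenC2 response carries its own 'status' field (200 = OK).
assert body.get("status") == 200, f"unexpected OpenC2 response: {body}"
print("query sbom: OK")
```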
In the 'demonstrate the use case' approach you would 'test' the orchestrator's decision logic as well.
First you might send a 'get SBoM' command and arrange for the returned SBoM to appear to have software from North Korea, and then validate that the orchestrator performed the correct follow-up actions.
Then you might send a 'get SBoM' command again and arrange for the returned SBoM to appear to contain known malware ...
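One lightweight way to run those legs, assuming the hypothetical `onboarding_action()` sketch above is in scope, is to stub the SBoM the consumer would return and assert the orchestrator's decision for each case. The SBoM shapes and expected action labels are the same illustrative ones used earlier.

```python
# Exercise each leg of the policy with a crafted SBoM and assert the
# orchestrator's follow-up decision. Reuses the hypothetical
# onboarding_action() sketched above; all field names are illustrative.

scenarios = [
    ({"components": [{"name": "libfoo", "provenance": "DPRK"}]},
     "sandbox-deception-center"),
    ({"components": [{"name": "libbar", "known_malware": True}]},
     "sandbox-malware-detonator"),
    ({"components": [{"name": "libbaz",
                      "vulnerabilities": [{"id": "CVE-2024-0001",
                                           "cvss": 9.8}]}]},
     "update-dmz"),
    ({"components": [{"name": "watched-lib"}]},
     "normal-network-increased-logging"),
    ({"components": [{"name": "libok"}]},
     "normal-network"),
]

for sbom, expected in scenarios:
    got = onboarding_action(sbom, watch_list={"watched-lib"})
    assert got == expected, f"expected {expected}, got {got}"
    print(f"{expected}: OK")
```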
IMHO the 'demonstrate the use case' approach often requires more work than the 'just test the commands' approach. I am not against demonstrating, particularly when it's easy and efficient.
I advocate more focus on the 'just test the commands' approach to make the most of our time. I maintain that if the commands work, logical people can infer that the use case scenarios will work.