Runtime evaluation of inputs and properties.
https://docs.google.com/document/d/1bYnGFl25JsPaxD01caFqU0imqOJLifuMT_wrQpbGNuc/edit?usp=sharing
Cases to test (in "integration tests"):
both get_input and get_property referenced in:
node properties
node interface operation inputs
relationship interface operation inputs
instance deploy count
outputs
node properties in a scaling group
Test by making a deployment update (AND a deployment modification; see the note at the end on why deployment modifications end up being skipped)
Additionally test a nested get_property + get_input (just node properties is enough); a sketch of what this nesting looks like follows below.
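For the nested case, the idea is a get_property pointing at a property whose value is itself a get_input, so two levels of evaluation are needed. A minimal sketch of what the parsed property values look like (the node, property, and input names are hypothetical, not taken from the example blueprint):

    # Hypothetical sketch: node1's property resolves via get_input, and
    # node2's property points at it via get_property, so evaluating
    # node2's property requires two levels of function evaluation.
    node1_properties = {
        'prop1': {'get_input': 'input1'},
    }
    node2_properties = {
        'copied_prop': {'get_property': ['node1', 'prop1']},
    }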
Example test procedure:
With this example blueprint: https://github.com/cloudify-cosmo/cloudify-manager/blob/6fced0ad54a338e6116dbe9c9476f2e625417c4d/tests/integration_tests/resources/dsl/deployment_update_functions.yaml
The operation functions referenced in the blueprint are https://github.com/cloudify-cosmo/cloudify-manager/blob/6fced0ad54a338e6116dbe9c9476f2e625417c4d/tests/integration_tests_plugins/testmockoperations/tasks.py#L508-L534
These operations store the values from all the tested sources into runtime properties, for verification: the node's own property, a property that comes from get_input, an operation input, and a relationship operation's input.
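For illustration, such a mock operation could look roughly like the following minimal sketch, using the standard plugin ctx API rather than copying the linked tasks.py; the property and input names are hypothetical. A relationship operation would do the same through ctx.source.instance.runtime_properties.

    from cloudify import ctx
    from cloudify.decorators import operation


    @operation
    def store(operation_input=None, **kwargs):
        # Copy the values under test into runtime properties so the test
        # can later read them back and see what the intrinsic functions
        # evaluated to at run time.
        props = ctx.instance.runtime_properties
        props['node_property'] = ctx.node.properties.get('property1')
        props['operation_input'] = operation_input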
Those procedures are performed in the integration tests: agentless_tests/test_runtime_functions.py
Procedure 1: (new workflow test)
1. Create the deployment with inputs: input1="aaa", fail_create=false
2. Run the install workflow
3. View the node1 instance's runtime properties and check that they are all "aaa"; view the outputs and check that they are all "aaa" as well
4. Run deployment update with a new blueprint that changes the hardcoded "aaa" property to "bbb", and also changes the input1 input to "bbb"
5. Perform the same verification as in step 3, but check that the values are now "bbb". If the functions were not re-evaluated at runtime, the values would stay "aaa". A rough client-level sketch of this procedure follows below.
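As a rough illustration of Procedure 1 at the REST-client level (the calls are from the public cloudify_rest_client, but the exact identifiers, the manager address, and the update-call arguments are assumptions, not a copy of the actual test code):

    from cloudify_rest_client import CloudifyClient

    # Hypothetical manager address and credentials.
    client = CloudifyClient(host='<manager-ip>', username='admin',
                            password='<password>', tenant='default_tenant')

    # 1. Create the deployment with the initial inputs.
    client.deployments.create('bp1', 'dep1',
                              inputs={'input1': 'aaa', 'fail_create': False})

    # 2. Run the install workflow (a real test waits for it to terminate).
    client.executions.start('dep1', 'install')

    # 3. All runtime properties on node1's instance, and all outputs, are "aaa".
    for instance in client.node_instances.list(deployment_id='dep1',
                                               node_id='node1'):
        assert all(v == 'aaa' for v in instance.runtime_properties.values())
    outputs = client.deployments.outputs.get('dep1')['outputs']
    assert all(v == 'aaa' for v in outputs.values())

    # 4. Deployment update to a blueprint where the hardcoded property is
    #    "bbb", together with the new input value.
    client.deployment_updates.update_with_existing_blueprint(
        'dep1', blueprint_id='bp1_v2', inputs={'input1': 'bbb'})

    # 5. Repeat step 3, this time expecting "bbb" everywhere.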
Procedure 2: (resume workflow test)
1. Create the deployment with inputs: input1="aaa", fail_create=true
2. Run the install workflow, which will fail because of fail_create
3. Run deployment update with a new blueprint that changes the hardcoded "aaa" property to "bbb", changes the input1 input to "bbb", and changes the fail_create input to false
4. Resume the failed install execution.
5. Perform the same verification as in procedure 1 step 5. Additionally, if the inputs were not re-evaluated at runtime, the workflow would not succeed at all, because fail_create would still evaluate to true. A sketch of this procedure follows below.
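A matching sketch for Procedure 2, reusing the hypothetical client and ids from the previous snippet, and assuming the executions client's resume call:

    # 1. Create the deployment so that the create operation will fail.
    client.deployments.create('bp1', 'dep2',
                              inputs={'input1': 'aaa', 'fail_create': True})

    # 2. Install fails because fail_create evaluates to true.
    execution = client.executions.start('dep2', 'install')

    # 3. Update to the blueprint/inputs with "bbb" and fail_create=false.
    client.deployment_updates.update_with_existing_blueprint(
        'dep2', blueprint_id='bp1_v2',
        inputs={'input1': 'bbb', 'fail_create': False})

    # 4. Resume the failed install execution.
    client.executions.resume(execution.id)

    # 5. Same verification as before, expecting "bbb"; if the inputs were
    #    not re-evaluated, fail_create would still be true and the resumed
    #    execution would fail again.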
Additionally, the automated tests repeat those procedures but, instead of running a deployment update, edit the input values in the database directly, skipping the deployment-update step altogether.
There is no public API for doing that, so this variant cannot be exercised reliably in manual tests. It does, however, prove that the values are re-evaluated at runtime, and not merely as a side effect of deployment update.
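For illustration only, the direct-edit variant conceptually amounts to something like the following; the actual tests use internal test helpers, and the database, table, and column names here are assumptions about the manager's internal PostgreSQL schema, not a supported interface:

    import json

    import psycopg2

    # Assumed schema: a 'deployments' table whose 'inputs' column holds the
    # deployment inputs as JSON. This bypasses deployment update entirely.
    with psycopg2.connect(host='<manager-ip>', dbname='cloudify_db',
                          user='cloudify', password='<password>') as conn:
        with conn.cursor() as cur:
            cur.execute(
                "UPDATE deployments SET inputs = %s WHERE id = %s",
                (json.dumps({'input1': 'bbb', 'fail_create': False}), 'dep2'),
            )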
The instance count test is skipped because deployment update cannot change the instance count; the scale workflow must still be used for that (see the sketch at the end).
Deployment modifications are skipped entirely: they cannot change inputs, so they are irrelevant here after all. The only thing they can do is set the instance count to an explicit number.
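For completeness, changing the instance count is still done with the built-in scale workflow, roughly like this (the scalable entity name is hypothetical; scalable_entity_name and delta are the workflow's standard parameters):

    # Add one instance of node1 (or of the scaling group containing it).
    client.executions.start('dep1', 'scale',
                            parameters={'scalable_entity_name': 'node1',
                                        'delta': 1})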