Cancelled Scale workflow drops "started" nodes.

Description

Steps to reproduce:

1. Install Cloudify Manager 4.3 with both a default and an external network. "default" maps to the same value as "private_ip", and "external" maps to the same value as "public_ip".

For example, from /etc/cloudify/config.yaml:
networks: {default: 10.10.0.4, external: 52.191.112.237}

2. Add your AWS credentials.
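
For example, one way to do this, assuming the blueprints read the credentials from secrets named aws_access_key_id and aws_secret_access_key (names assumed here; adjust to whatever the blueprints actually expect):
cfy secrets create aws_access_key_id -s <your-aws-access-key-id>
cfy secrets create aws_secret_access_key -s <your-aws-secret-access-key>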

3. Add the utilities plugin and the awssdk plugin.
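
For example, with the plugin wagons and their plugin.yaml files available locally (all paths below are placeholders):
cfy plugins upload <path-to-utilities-plugin.wgn> -y <path-to-utilities-plugin.yaml>
cfy plugins upload <path-to-awssdk-plugin.wgn> -y <path-to-awssdk-plugin.yaml>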

4. Install the aws-network-example blueprint.
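
For example (blueprint path and IDs below are placeholders; add -i <inputs.yaml> if the blueprint requires inputs):
cfy install <path-to-aws-network-example>/blueprint.yaml -b aws-network-example -d aws-network-example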

5. Install the AWS nodecellar example (the version for 4.3).
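
For example (placeholder path again; the deployment ID nodecellar is assumed here and reused in the commands below):
cfy install <path-to-nodecellar-aws-blueprint.yaml> -b nodecellar -d nodecellar -i <inputs.yaml>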

6. Check the node-instance list for the deployment. There are two `nodejs_host` node instances.
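
For example, assuming the deployment ID nodecellar from the previous step:
cfy node-instances list -d nodecellar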

7. Execute the scale workflow on the `nodejs_group`. (It may hang, which is a separate issue with this blueprint, mainly that the agent is no longer communicating with the manager. That may be another issue with 4.3, or it may be in this blueprint; I am not sure. In any case, that problem led me to discover the issue I am describing here.)
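
For example, using the built-in scale workflow parameters (deployment ID again assumed to be nodecellar):
cfy executions start scale -d nodecellar -p scalable_entity_name=nodejs_group -p delta=1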

8. During the execution, after a new `nodejs_host` node instance has been created, check the node-instance list. You will see three `nodejs_host` node instances.

9. After the workflow has gone beyond starting the `nodejs_host` node, cancel the workflow.
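
For example, by looking up the running execution and cancelling it by ID:
cfy executions list -d nodecellar
cfy executions cancel <execution-id>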

_An unrelated issue: if you get to this step and cancel, the uninstall workflow will be stuck in pending for some reason._

10. Check the node instances of `nodejs_host`. The new one from the scale execution is missing! Mysterious.

Expected behavior is that all existing node instances remain in the model, so that they can be uninstalled.

Assignee

Unassigned

Reporter

Trammell -

Labels

None

Bug Type

None

Target Version

None

Severity

None

Affects versions

None