Manager status reporter user fails snapshot restore

Description

(output captured from openstack_compute_instance_v2.server2 remote-exec)

_ TestManagerStatusReporter.test_status_reporter_has_correct_key_after_snapshot_restore _

self = <integration_tests.tests.agentless_tests.test_status_reporter.TestManagerStatusReporter testMethod=test_status_reporter_has_correct_key_after_snapshot_restore>

    def test_status_reporter_has_correct_key_after_snapshot_restore(self):
        snapshot_id = "test_" + str(uuid4())
        initial_token_key, _ = self._get_reporter_token_key()

        execution = self.client.snapshots.create(snapshot_id, False)
        self.wait_for_execution_to_end(execution)

        self._update_reporter_token_key('a' * 32)

        execution = self.client.snapshots.restore(snapshot_id)
        # Temporary fix until is fixed.
        sleep(40)
        # At this point it should be safe to query executions.
        self.wait_for_execution_to_end(execution)

        current_token_key, reporter_id = self._get_reporter_token_key()
        self.assertEqual(initial_token_key, current_token_key)

        reporter_configuration = self._get_reporter_configuration()
        encoded_id = self._get_reporter_encoded_user_id(reporter_id)
        full_token = '{0}{1}'.format(encoded_id, initial_token_key)
        self.assertEqual(reporter_configuration['token'], full_token)

        reporter_service_status = self.execute_on_manager(
            "sh -c 'systemctl is-active cloudify-status-reporter || :'"
        ).stdout.strip()
>       self.assertEqual(reporter_service_status, 'active')
E       AssertionError: 'failed' != 'active'

dev/repos/cloudify-manager/tests/integration_tests/tests/agentless_tests/test_status_reporter.py:55: AssertionError
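The `sleep(40)` in the failing test above is flagged in its own comment as a temporary workaround. A more robust pattern, sketched here as a generic helper (an assumption, not the project's actual code), is to poll for the condition instead of sleeping a fixed interval:

```python
import time

def wait_for(predicate, timeout=40, interval=2):
    """Poll `predicate` until it returns truthy or `timeout` seconds pass.

    Returns True if the predicate succeeded, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Hypothetical usage in place of the fixed sleep: keep polling until it
# is safe to query executions (the predicate here is a stand-in).
restore_done = wait_for(lambda: True, timeout=1, interval=0.01)
```

This returns as soon as the condition holds rather than always paying the full 40 seconds, and it makes the timeout explicit instead of implicit in a magic sleep.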
---------- Captured log setup ----------
INFO TESTENV:env.py:227 Creating testing env..
INFO TESTENV:env.py:71 Setting up test environment... workdir=/tmp/cloudify-integration-tests/WorkflowsTests-xphf
INFO TESTENV:env.py:96 Starting manager container
INFO docl:docl.py:217 Waiting for RabbitMQ
WARNING pika.connection:connection.py:1816 Could not connect, 0 attempts left
ERROR pika.adapters.blocking_connection:blocking_connection.py:464 Connection open failed - 'Connection to 172.20.0.2:5671 failed: [Errno 111] Connection refused'
WARNING pika.connection:connection.py:1816 Could not connect, 0 attempts left
ERROR pika.adapters.blocking_connection:blocking_connection.py:464 Connection open failed - 'Connection to 172.20.0.2:5671 failed: [Errno 111] Connection refused'
WARNING pika.connection:connection.py:1816 Could not connect, 0 attempts left
ERROR pika.adapters.blocking_connection:blocking_connection.py:464 Connection open failed - 'Connection to 172.20.0.2:5671 failed: [Errno 111] Connection refused'
INFO docl:docl.py:221 Waiting for REST service and Storage
INFO docl:docl.py:226 Waiting for postgres
INFO docl:docl.py:141 Container start took 20.4112689495 seconds
---------- Captured log call -----------
INFO Flask Utils:flask_utils.py:83 Resetting PostgreSQL DB
INFO test_status_reporter_has_correct_key_after_snapshot_restore:test_cases.py:539 Cleaning up the file system...
INFO postgresql:postgresql.py:39 Trying to execute SQL query:
SELECT
    json_build_object(
        'id', id,
        'username', username,
        'api_token_key', api_token_key
    )
FROM users
WHERE username = 'manager_status_reporter';

INFO postgresql:postgresql.py:39 Trying to execute SQL query:
UPDATE users
SET api_token_key = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
WHERE username = 'manager_status_reporter';

INFO postgresql:postgresql.py:39 Trying to execute SQL query:
SELECT
    json_build_object(
        'id', id,
        'username', username,
        'api_token_key', api_token_key
    )
FROM users
WHERE username = 'manager_status_reporter';

INFO TESTENV:env.py:175 Shutting down all dispatch processes
INFO test_status_reporter_has_correct_key_after_snapshot_restore:test_cases.py:106 Attempting to save the manager's logs...
INFO test_status_reporter_has_correct_key_after_snapshot_restore:test_cases.py:123 Cloudify remote log saving path found: /tmp/1411_build_cfy_manager_logs.
INFO test_status_reporter_has_correct_key_after_snapshot_restore:test_cases.py:124 If you're running via itest-runner, make sure to set a local path as well with CFY_LOGS_PATH_LOCAL.
INFO test_status_reporter_has_correct_key_after_snapshot_restore:test_cases.py:137 Saving manager logs for test: test_status_reporter_has_correct_key_after_snapshot_restore...
-------- Captured log teardown ---------
INFO TESTENV:env.py:240 Destroying testing env..
INFO TESTENV:env.py:162 Destroying test environment...
ERROR pika.adapters.blocking_connection:blocking_connection.py:472 Connection close detected; result=BlockingConnection__OnClosedArgs(connection=<SelectConnection CLOSED socket=None params=<ConnectionParameters host=172.20.0.2 port=5671 virtual_host=/ ssl=True>>, reason_code=-1, reason_text="error(104, 'Connection reset by peer')")
INFO TESTENV:env.py:170 Deleting test environment from: /tmp/cloudify-integration-tests/WorkflowsTests-xphf
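The SQL in the captured log returns the reporter's row as a single json_build_object value. As a rough sketch of how a helper such as `_get_reporter_token_key` might turn that row into the (token key, reporter id) pair the test compares (the real helper's implementation is not shown in the log; the example values are hypothetical):

```python
import json

def parse_reporter_row(raw_json):
    """Parse the json_build_object row produced by the query against the
    users table (field names taken from the logged SQL)."""
    row = json.loads(raw_json)
    return row['api_token_key'], row['id']

# Example row shaped like the query output for manager_status_reporter;
# the id and key values here are made up for illustration.
raw = '{"id": 2, "username": "manager_status_reporter", "api_token_key": "%s"}' % ('a' * 32)
token_key, reporter_id = parse_reporter_row(raw)
```

The test then asserts that this token key survives the snapshot restore and that the configured token equals the encoded reporter id concatenated with it.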

Steps to Reproduce

Environment:
OS (CLI), HA cluster, cloud provider
------------------------------------

Steps to reproduce:
------------------
1.
2.
3.

Expected result:
---------------

Actual result:
-------------

Why Propose Close?

None

Activity

David Ginzbourg
December 8, 2019, 11:53 AM

This test will keep failing until the manager status reporter works. As far as I understand, the manager reporter service is currently in a failed state. Assuming the referenced issue is fixed.

 

David Ginzbourg
December 8, 2019, 5:25 PM

Running system and integration tests again on the CY-2062 branch.

David Ginzbourg
December 9, 2019, 1:11 PM

The bug is fixed; I'll know for sure once the system tests run with a new image.

David Ginzbourg
December 10, 2019, 1:30 PM

Waiting for this build’s results.

Assignee

David Ginzbourg

Reporter

Tal Yakobovitch

Labels

None

Severity

Critical

Target Version

5.0.5

Premium Only

no

Found In Version

5.0

QA Owner

None

Bug Type

new feature bug

Customer Encountered

No

Customer Name

None

Release Notes

yes


Priority

Blocker