Adds Coriolis integration tests #413
Force-pushed from 89ed27b to 0f365df
```python
def _make_replicator(self, pkey_path, event_mgr, volumes_info, repl_state):
    pkey = paramiko.RSAKey.from_private_key_file(pkey_path)
    conn_info = {
```
So we have a test provider that points to localhost instead of a minion vm. This is very likely to pollute the test environment.
Since we're planning to use a container or vm in a subsequent PR, I won't block it.
```python
# SQLite does not support bulk UPDATE ... FROM ... for
# joined-table-inheritance models; replace the production function
# with a per-object alternative for the lifetime of this process.
db_api._delete_transfer_action = _sqlite_delete_transfer_action
```
We're using quite a few hacks in order to run the Coriolis services manually, without an AMQP broker or MySQL database (using SQLite instead). It also skips the standard service launch commands, which therefore go untested.
On the other hand, it allows us to run the integration tests with a minimal configuration.
I'd say it's a reasonable compromise.
Right. Running alongside the other services might be interesting as well; unintended interactions or issues could be caught that way. Maybe at some point we'll set those up too, but for the time being these are sufficient: they let us test the project internals.
The SQLite issue could eventually be addressed as well, if we wanted to add support for it, but that's beyond the scope of this PR.
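The per-object workaround mentioned above can be illustrated with the stdlib `sqlite3` module (the table names and the `_sqlite_delete_transfer_action` layout here are hypothetical stand-ins for the actual Coriolis models; only the function name mirrors the diff):

```python
import sqlite3

def _sqlite_delete_transfer_action(conn, action_id):
    # SQLite cannot express a single multi-table DELETE/UPDATE over a
    # join, so delete from each table of the (assumed) joined-table
    # inheritance hierarchy individually, one object at a time.
    conn.execute("DELETE FROM replica WHERE base_id = ?", (action_id,))
    conn.execute("DELETE FROM base_transfer_action WHERE id = ?", (action_id,))
    conn.commit()

# Minimal joined-table-inheritance-like layout: a base table plus a
# subtype table sharing the same id.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE base_transfer_action (id TEXT PRIMARY KEY)")
conn.execute(
    "CREATE TABLE replica (base_id TEXT REFERENCES base_transfer_action(id))")
conn.execute("INSERT INTO base_transfer_action VALUES ('a1')")
conn.execute("INSERT INTO replica VALUES ('a1')")

_sqlite_delete_transfer_action(conn, "a1")
remaining = conn.execute(
    "SELECT COUNT(*) FROM base_transfer_action").fetchone()[0]
```

The production code can keep its bulk statement on MySQL while the test process monkey-patches in this per-object variant, as the diff shows.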
Introduces the skeleton for integration tests that exercise the full Coriolis migration pipeline without requiring RabbitMQ, Keystone, or Barbican.
- `eventlet.monkey_patch()` needs to be called (it is also called in `coriolis/cmd/__init__.py`).
- Uses a `fake://` messaging_transport_url, effectively an in-process fake messaging queue, instead of relying on an external service.
- Uses a no-auth middleware, removing the need for Keystone.
- Uses a test APIRouter, which won't have the `/project_id/` prefix.
- Uses an in-process worker server, which will not spawn subprocesses. This is needed; otherwise Coriolis would hang waiting for workers to reply (the messaging transport URL is fake).
- Uses `python-coriolisclient` to send requests to Coriolis.
- Adds a few smoke tests, which lightly exercise the Coriolis API, ensuring the harness works.
- Adds an integration entry in tox (`sudo tox -e integration`).
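The reason the in-process worker is needed can be sketched without oslo.messaging at all: with a fake transport, both the API side and the worker side must live in the same process and share an in-memory queue, otherwise a request is published and nobody ever replies. A toy illustration (none of these names are Coriolis APIs):

```python
import queue
import threading

# Two in-memory queues stand in for the broker that a real
# "rabbit://" transport would provide.
requests = queue.Queue()
replies = queue.Queue()

def fake_worker():
    # Stand-in for the in-process worker server: it consumes from the
    # shared queue directly, so no subprocess or broker is involved.
    method, _args = requests.get()
    if method == "ping":
        replies.put("pong")

threading.Thread(target=fake_worker, daemon=True).start()

# The "API" side publishes a call and waits for the reply. If the
# worker ran in a separate process, this get() would block forever,
# which is exactly the hang described above.
requests.put(("ping", {}))
reply = replies.get(timeout=5)
```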
Adds TestExportProvider and TestImportProvider. Implements BaseReplicaExportProvider / BaseReplicaExportValidationProvider backed by a local scsi_debug block device. Implements BaseReplicaImportProvider backed by pre-allocated scsi_debug block devices.
Adds shared utilities for the scsi_debug-backed test provider. Adds Replica transfer tests, including incremental replica transfer. Adds Replica deployment test. Adds transfer failure test.
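The scsi_debug-backed providers presumably load the kernel module somewhat like this (the helper names are hypothetical; `scsi_debug` and its `dev_size_mb` parameter are real, and loading the module needs root, which is why the tox entry is invoked with sudo):

```python
import subprocess

def scsi_debug_cmd(dev_size_mb=128):
    # Build the modprobe invocation; dev_size_mb controls the size of
    # the fake disk that scsi_debug exposes as a block device.
    return ["modprobe", "scsi_debug", "dev_size_mb=%d" % dev_size_mb]

def load_scsi_debug(dev_size_mb=128):
    # Hypothetical helper: actually load the module (root only).
    subprocess.check_call(scsi_debug_cmd(dev_size_mb))
```

Using a kernel-provided fake disk keeps the export/import providers exercising real block-device I/O paths without touching any physical storage.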
Adds `CORIOLIS_TEST_SSH_KEY_PATH` configuration option for the integration tests.
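The PR does not show how `CORIOLIS_TEST_SSH_KEY_PATH` is wired up; assuming it is read from the environment with a fallback, the lookup could look roughly like this (the helper name and default are illustrative only):

```python
import os

def get_ssh_key_path(default="~/.ssh/id_rsa"):
    # Hypothetical lookup: prefer the CORIOLIS_TEST_SSH_KEY_PATH
    # environment variable, falling back to a default key path.
    path = os.environ.get("CORIOLIS_TEST_SSH_KEY_PATH", default)
    return os.path.expanduser(path)
```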
Force-pushed from 0f365df to ce4ba79
Introduces the skeleton for integration tests that exercise the full Coriolis migration pipeline without requiring RabbitMQ, Keystone, or Barbican.
- `eventlet.monkey_patch()` needs to be called (it is also called in `coriolis/cmd/__init__.py`).
- Uses a `fake://` messaging_transport_url, effectively an in-process fake messaging queue, instead of relying on an external service.
- Uses `python-coriolisclient` to send requests to Coriolis.
- Adds `TestProvider`, which implements the base provider interfaces.
- Adds an integration entry in tox (`sudo tox -e integration`).