While testing interoperability of Liberty and Mitaka services I found
two problems with backup job scheduling when running in non-scalable
mode (that is, when mixed Liberty and Mitaka c-bak services run
together, as happens during a live upgrade).
First, when passing volume.host into scheduling we don't strip the
backend and pool parts (@backend#pool), so in a multi-backend
environment the value never matches any backup host name and that hint
is silently ignored.
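For illustration, a Cinder volume host has the form 'host@backend#pool',
while backup services register under the bare 'host' part, so that is
the only piece the comparison should use. A minimal sketch of the
stripping, with made-up host strings and a hypothetical helper name
(the actual patch below relies on volume_utils.extract_host):

    # Illustration only: the helper name and host values are hypothetical;
    # Cinder's volume_utils.extract_host(host, level='host') does the same.
    def strip_backend_and_pool(volume_host):
        """Reduce 'host@backend#pool' to just 'host'."""
        return volume_host.split('@')[0]

    assert strip_backend_and_pool('node-1@lvmdriver-1#pool0') == 'node-1'
    assert strip_backend_and_pool('node-1') == 'node-1'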
The second problem is that we don't pass the target volume's host into
scheduling when restoring a volume. This means the jobs are not
scheduled exactly the old way, so a Liberty cinder-backup may receive a
restore request for a target volume it doesn't have access to.
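To sketch the intended selection order (names here are simplified and
hypothetical, not the actual Cinder code): prefer the target volume's
host, because in a mixed cloud a Liberty c-bak can only handle volumes
of the c-vol it is coupled with, and fall back to the backup's own host
otherwise.

    # Hypothetical sketch of the selection order, not the real implementation.
    def pick_backup_host(backup_host, volume_host, enabled_hosts):
        # Prefer the target volume's host: a Liberty c-bak only serves
        # volumes owned by its colocated c-vol.
        if volume_host and volume_host in enabled_hosts:
            return volume_host
        if backup_host and backup_host in enabled_hosts:
            return backup_host
        raise RuntimeError('no backup service available')

    # During a live upgrade the target volume may still live on a Liberty node:
    assert pick_backup_host('mitaka-1', 'liberty-1',
                            {'liberty-1', 'mitaka-1'}) == 'liberty-1'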
This commit fixes both problems. Note that the change is very low-risk,
as it affects only the non-scalable mode, which is broken anyway
without this fix.
Change-Id: Ib7195f27a1e455732db8621cd09ba87daa5d5a33
Closes-Bug: #1558073
         # This snippet should go away in Newton. Note that volume_host
         # parameter will also be unnecessary then.
         if not self._is_scalable_only():
+            if volume_host:
+                volume_host = volume_utils.extract_host(volume_host,
+                                                        level='host')
             if volume_host and self._is_backup_service_enabled(az,
                                                                volume_host):
                 return volume_host
         # Setting the status here rather than setting at start and unrolling
         # for each error condition, it should be a very small window
         backup.host = self._get_available_backup_service_host(
-            backup.host, backup.availability_zone)
+            backup.host, backup.availability_zone, volume_host=volume.host)
         backup.status = fields.BackupStatus.RESTORING
         backup.restore_volume_id = volume.id
         backup.save()