The zfssa driver doesn't handle the case where a volume may not
exist on the backend when a delete call is sent. The result is a
stack trace in the logs, sometimes a crash of the service, and an
inability to clean up volumes from Cinder's perspective.
This patch doesn't address the root cause of the issue, but
it does handle the case more gracefully by catching the
exception, logging an error and returning.
This way the volume can be cleaned up on the Cinder side, and the
operator still has an indication that something went wrong. This
follows a pattern common to most of the drivers in Cinder.
Change-Id: I09725b29effb79450d010949527bd54329919f52
Closes-Bug: #1472412
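
For reference, below is a minimal sketch of how the new behaviour
could be exercised in a unit test. The test base class, mock
target, and volume fields are assumptions for illustration, not
part of this patch:

    import mock

    from cinder import exception
    from cinder.tests.unit import test_zfssa  # assumed test module location


    class DeleteMissingVolumeTest(test_zfssa.TestZFSSAISCSIDriver):
        def test_delete_volume_missing_on_backend(self):
            # Simulate the backend reporting that the LUN is gone.
            self.drv.zfssa.get_lun = mock.Mock(
                side_effect=exception.VolumeBackendAPIException(
                    data='Error Getting Volume'))
            # With this patch, delete_volume logs an error and returns
            # instead of raising, so Cinder can remove its own record.
            self.drv.delete_volume({'name': 'volume-1', 'id': 'fake-id'})
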
"""Deletes a volume with the given volume['name']."""
LOG.debug('zfssa.delete_volume: name=%s', volume['name'])
lcfg = self.configuration
- lun2del = self.zfssa.get_lun(lcfg.zfssa_pool,
- lcfg.zfssa_project,
- volume['name'])
+
+ try:
+ lun2del = self.zfssa.get_lun(lcfg.zfssa_pool,
+ lcfg.zfssa_project,
+ volume['name'])
+ except exception.VolumeBackendAPIException as ex:
+ # NOTE(jdg): This will log an error and continue
+ # if for some reason the volume no longer exists
+ # on the backend
+ if 'Error Getting Volume' in ex.message:
+ LOG.error(_LE("Volume ID %s was not found on "
+ "the zfssa device while attempting "
+ "delete_volume operation."), volume['id'])
+ return
+
# Delete clone temp snapshot. see create_cloned_volume()
if 'origin' in lun2del and 'id' in volume:
if lun2del['nodestroy']: