Jeegn Chen [Fri, 8 May 2015 07:41:23 +0000 (15:41 +0800)]
Address 0x712d8e0e error in VNX Cinder Driver
In the original implementation, the 0x712d8e0e error is not considered.
Thus, some snapshot creation and volume extending operations may fail
when the system is in a transition state.
This patch adds wait-and-retry logic when the 0x712d8e0e error
occurs so that the system is more error-tolerant.
Jeegn Chen [Wed, 20 May 2015 14:00:07 +0000 (22:00 +0800)]
Add missing '-o' CLI option to VNX Cinder Driver
In some NaviSecCLI commands, if -o is not given, the command may prompt
for 'y' or 'n' confirmation in certain scenarios, which does not work
for Cinder.
This patch adds the missing '-o' option to avoid these failures.
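As a rough illustration only (everything except the '-o' flag itself is an assumption, not the driver's actual code), the fix amounts to always including '-o' so the CLI never blocks waiting for interactive confirmation:

    # Hypothetical sketch: build a NaviSecCLI command with the '-o' flag so it
    # never waits for interactive 'y'/'n' input.
    def build_naviseccli_cmd(*cmd_args):
        base = ['naviseccli']
        # '-o' suppresses confirmation prompts that would otherwise hang Cinder.
        return base + list(cmd_args) + ['-o']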
Eric Harney [Fri, 15 May 2015 17:23:08 +0000 (13:23 -0400)]
Add ability for drivers to copy data preserving sparseness
Drivers can set self._sparse_copy_volume_data to opt in
to copying data in a way that preserves sparseness.
This is likely to be desired elsewhere, but currently this
patch only enables it for the NFS driver and provides a
method for other drivers to opt in, in case there are
unexpected issues in some drivers.
On the NFS driver, this prevents volume migration from
changing a volume file from sparse to fully allocated.
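A driver opting in might look roughly like the following sketch; aside from the _sparse_copy_volume_data attribute named above, the class and method names are simplified assumptions:

    class MyNfsLikeDriver(object):
        """Simplified sketch of a driver opting in to sparse-preserving copies."""

        def __init__(self, *args, **kwargs):
            super(MyNfsLikeDriver, self).__init__()
            # Opt in: copy paths may then preserve sparseness instead of
            # fully allocating the destination file during migration.
            self._sparse_copy_volume_data = True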
Tom Barron [Tue, 21 Apr 2015 15:07:30 +0000 (11:07 -0400)]
Use nfs_oversub_ratio when reporting pool capacity
Currently, the NetApp NFS drivers do not make use of the nfs_oversub_ratio
configuration parameter, which would enable OpenStack administrators to report
anything other than the most conservative capacity and usage information
up to the scheduler.
This commit:
* modifies the NetApp drivers to use the nfs_oversub_ratio as
documented.
* uses direct API calls to the filers to gather capacity information
  rather than stat and du commands
* brings our reporting of reserved percentage in line with the way
the scheduler actually makes use of this statistic
* simplifies and cleans up the way we gather and report pool statistics
Note that this fix addresses an in-the-field bug report from Juno
and is intended to be the basis for backported fixes.
In Kilo, a more general approach to overprovisioning using
max_oversubscription_ratio was introduced via commit
3548a4bc9edbb26b248b5af5ecc2145f2c6f7481.
In this newer approach, the scheduler computes "virtual" free space
rather than the driver reporting "apparent" free space.
This fix to bring NetApp NFS drivers into conformity with the generic
NFS driver is not intended as a substitute for introducing support
in our drivers for the newer approach to overprovisioning. Rather,
it is a bug-fix for an existing failure to honor the nfs_oversub_ratio.
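As a rough sketch of the reporting behavior being fixed (variable and key names are assumptions for illustration; the actual driver code differs), the oversubscription ratio scales the capacity the driver reports up to the scheduler:

    def report_pool_capacity(total_gb, free_gb, nfs_oversub_ratio,
                             reserved_percentage):
        """Illustrative only: report capacity scaled by nfs_oversub_ratio."""
        return {
            # The "apparent" capacity the scheduler sees is the real capacity
            # multiplied by the configured oversubscription ratio.
            'total_capacity_gb': total_gb * nfs_oversub_ratio,
            'free_capacity_gb': free_gb * nfs_oversub_ratio,
            # Reserved percentage is passed through so the scheduler applies
            # it the way it expects, rather than being pre-applied here.
            'reserved_percentage': reserved_percentage,
        }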
Mitsuhiro Tanino [Wed, 13 May 2015 15:59:13 +0000 (11:59 -0400)]
LVM: Pass volume size in MiB to copy_volume() during volume migration
Currently migrate_volume() in lvm.py passes the volume size to
copy_volume() in GiB rather than MiB. The size argument
of copy_volume() requires a MiB value. As a result, if the volume size
is 1 GiB, copy_volume() copies only the 1 MiB head of the 1 GiB volume to the
destination volume, and the volume data is corrupted.
This patch fixes migrate_volume() to pass the volume size in MiB.
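The essence of the fix is a GiB-to-MiB conversion before the copy; a minimal sketch, with simplified argument names that are not the actual driver signatures:

    # Illustrative only: convert the volume size from GiB to MiB before copying.
    def migrate_volume_copy(copy_volume, src_path, dest_path, size_in_gb):
        size_in_mb = size_in_gb * 1024  # copy_volume() expects MiB, not GiB
        copy_volume(src_path, dest_path, size_in_mb)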
LVM: Support efficient data copy using "dd" for create_cloned_volume
create_cloned_volume() uses the dd command for data copy, but the
copy always copies full blocks even if the source data contains
many null or zero blocks. When we use thin provisioned LVM,
blocks are not pre-allocated, so unused regions return zeros.
If we copy every block to the destination volume, unnecessary blocks
will be allocated and the usage will be 100%.
The dd command has a conv=sparse option in order to copy data more
efficiently. This patch enables the conv=sparse option as an argument
of the dd command for create_cloned_volume when we use thin provisioned
LVM.
[NOTE]
This patch only enables the conv=sparse parameter of the dd command for
the create_cloned_volume() path of the LVM driver. There are some places
using dd in Cinder, but we should carefully consider applying
this parameter elsewhere because misuse of this parameter can
cause security issues or data corruption. Also, we DO NOT use this
parameter for the volume wiping case because the volume would not be
cleared at all.
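A rough sketch of the idea (the helper name and defaults are assumptions, not the driver's actual code): append conv=sparse to the dd arguments only on the thin-provisioned clone path.

    def build_dd_args(src, dest, block_count, blocksize='1M', sparse=False):
        """Illustrative dd invocation for the clone path.

        conv=sparse makes dd skip writing blocks that are all zeros, so a
        thin-provisioned destination LV is not fully allocated by the copy.
        """
        args = ['dd', 'if=%s' % src, 'of=%s' % dest,
                'count=%d' % block_count, 'bs=%s' % blocksize]
        if sparse:
            args.append('conv=sparse')
        return args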
Xing Yang [Thu, 2 Apr 2015 05:31:56 +0000 (01:31 -0400)]
Fix a problem with FAST support in VMAX driver
The VMAX driver doesn't distinguish between storage groups for
masking views (used for attachment) and storage groups for volumes
under FAST policy. This causes problems during attach volume.
This patch fixes the problem.
Sean McGinnis [Sat, 25 Apr 2015 00:15:37 +0000 (19:15 -0500)]
Fix incorrect reraising of exceptions
There are several places in the code where exception handling raises
the exception variable rather than just calling raise. This results
in the traceback being incorrect.
Had considered adding a hacking check for this, but that becomes a
little tricky. There are valid places where "raise ex" is used that
would prevent a simple check.
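For reference, the difference being fixed is standard Python behavior:

    def do_something():
        raise ValueError("original failure")

    try:
        do_something()
    except ValueError as ex:
        # Wrong (what the patch removes): 'raise ex' -- under Python 2 this
        # discards the original traceback, so the error appears to start here.
        # raise ex
        # Right: a bare 'raise' re-raises the active exception with its
        # original traceback intact.
        raise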
Thang Pham [Sun, 1 Mar 2015 22:46:02 +0000 (17:46 -0500)]
Switch to oslo_versionedobjects
oslo_versionedobjects was not made available until very
late in the Kilo cycle (i.e. near the end of kilo-3).
In order to make progress on cinder objects, a fork of
nova objects was made, so that proper trial and testing
could be done. The following patch makes the switch to
use oslo_versionedobjects.
If an invalid 'attachment_id' is passed to the os-force_detach
API, it returns a 500 error because, even though the
VolumeAttachmentNotFound exception is raised from the manager,
it gets converted to a RemoteError by the rpcapi call.
This patch catches the RemoteError exception in the API and returns a 400
error to the user if the exception type is VolumeAttachmentNotFound
or InvalidVolume. In other cases a 500 error is still raised.
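A minimal sketch of the pattern described; the function and call here are illustrative assumptions, not the actual API code:

    import oslo_messaging as messaging
    import webob.exc

    def handle_force_detach(volume_api, context, volume, connector):
        try:
            # The detach request goes through rpcapi to the volume manager.
            volume_api.detach(context, volume, connector)
        except messaging.RemoteError as error:
            if error.exc_type in ('VolumeAttachmentNotFound', 'InvalidVolume'):
                # Return 400 to the user instead of a generic 500.
                raise webob.exc.HTTPBadRequest(explanation=error.value)
            raise  # anything else still surfaces as a 500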
John Griffith [Mon, 11 May 2015 19:55:41 +0000 (13:55 -0600)]
Check volume_backend in retype
The retype command will always attempt a call
to the driver.retype method. In *most* cases this
will hit the default implementation, which returns False because
most drivers don't implement any retype (most don't; a few do).
The problem is that the drivers that do implement this will, in
most cases, iterate through the settings, make only the changes
that are valid, ignore the rest, and then return True. I think
this is "ok" for the drivers to do; drivers should be allowed
to be somewhat dumb WRT Cinder state management and placement
info. If we give them an invalid command (which we're doing here)
then it's on us higher up the chain, IMO.
The result is that, for example, if you're trying to retype
from backend-a to backend-b and backend-a implements retype,
it can return True, telling the manager that the volume was
successfully retyped even when it wasn't.
There's a lot of confusion around this bug. YES, the
filter scheduler is used to determine whether the retype is
valid and to what host. That's not the issue; the issue
is that regardless of the source and destination host settings
that are provided by the filter scheduler, we always make the
call to the driver, introducing the opportunity for a false
success status being reported back.
This patch adds a very simple check between the source and
destination host settings as provided by the scheduler and in
the case that the two are "different"(not including pool designations)
we skip calling the driver.retype method altogether and fall through
to the migrate process.
This introduces a new hosts_are_equivalent method in
cinder.volume.utils.
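A minimal sketch of the comparison being added; Cinder host strings take the form 'host@backend#pool', and the helper below is an illustration rather than the exact implementation:

    def hosts_are_equivalent(host_1, host_2):
        """Compare two Cinder hosts, ignoring any '#pool' suffix."""
        return host_1.split('#')[0] == host_2.split('#')[0]

    # Usage in retype (illustrative): only call driver.retype() when the
    # source and destination hosts match; otherwise fall through to migrate.
    print(hosts_are_equivalent('node1@backend-a#pool1', 'node1@backend-a#pool2'))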
John Griffith [Tue, 12 May 2015 21:48:03 +0000 (15:48 -0600)]
Fix overwrite of params in SF image cache update
In the update cache volume routine of the SolidFire driver
we're unintentionally re-initializing the params variable
when assigning volID, as opposed to just adding another key
to it.
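The bug pattern, roughly (the key names and values here are illustrative, not the driver's actual parameters):

    cache_name, attrs, vol_id = 'img-cache-1234', {'image': 'cirros'}, 42
    params = {'name': cache_name, 'attributes': attrs}

    # Bug: re-initializing the dict here drops the keys assigned above.
    # params = {'volumeID': vol_id}

    # Fix: add the volume ID as another key on the existing params.
    params['volumeID'] = vol_id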
John Griffith [Tue, 12 May 2015 14:03:27 +0000 (08:03 -0600)]
Check type match on create from source/snap
We used to allow creating from source/snap and specifying a
different type than the originating resource when doing so.
Once we started getting more drivers and more multi-backend
configurations, we implemented a check in volume.api that
took this away (broke it). There have been a number of
arguments about whether this should be allowed or not, since
it could fail after the rpc call, leaving the user with
nothing more than a "failed" volume and no explanation as to
why.
This patch allows the capability, but checks validity at the
API layer before issuing the create call. There are two
requirements for the new type specification to be valid:
1. There is only one backend (cinder-volume) topic configured
2. Both types in question specify the same volume_backend_name
If neither of these requirements is met, the user will receive
an "invalid type" error explaining that the type combination is
not compatible and that they should omit the type argument altogether.
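A sketch of the kind of API-layer check described above; the helper name and arguments are assumptions, not the actual volume.api code:

    def type_change_is_valid(new_type, source_type, backend_topic_count):
        """Illustrative check: a different type on create-from-source/snap is
        valid only if it cannot land the volume on a different backend."""
        # Requirement 1: only one cinder-volume backend topic is configured.
        if backend_topic_count == 1:
            return True
        # Requirement 2: both types name the same volume_backend_name.
        new_backend = new_type.get('extra_specs', {}).get('volume_backend_name')
        src_backend = source_type.get('extra_specs', {}).get('volume_backend_name')
        return new_backend is not None and new_backend == src_backend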
Sean McGinnis [Tue, 12 May 2015 13:52:52 +0000 (08:52 -0500)]
Logging not using oslo.i18n guidelines (openstack)
Multi-patch set for easier chunks. This one addresses
the openstack cinder directory. That directory is synced
from oslo, so no changes were made there. Translation markers are
being used, so this just removes the hacking check
exclusion of that directory.
Some cleanup of a couple of files is also included in this
patch for other directories that had been previously
covered.
There have been quite a few instances found where the
i18n guidelines are not being followed. I believe this
has helped lead to some of the confusion around how to
correctly do this. Other developers see this code and
assume it is an example of the correct usage.
This patch attempts to clean up most of those violations
in the existing codebase to hopefully help avoid some of
that confusion in reviews.
Some issues addressed:
* Correct log translation markers for different log levels
* Passing format values as arguments to the call, not preformatting
* Not forcing translation via six.text_type and others
Guidelines can be found here:
http://docs.openstack.org/developer/oslo.i18n/guidelines.html
Hacking checks will not be able to identify all violations of
the guidelines, but they could be useful for catching obvious
ones such as LOG.info("No markers!").
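For reference, the guideline patterns being enforced, using the marker functions Cinder imports from cinder.i18n (the log messages themselves are made-up examples):

    import logging

    from cinder.i18n import _LE, _LI, _LW

    LOG = logging.getLogger(__name__)
    volume_id = 'vol-1234'  # example value only

    # Use the level-specific marker and pass format values as arguments,
    # so translation and interpolation happen lazily:
    LOG.info(_LI("Created volume %(id)s."), {'id': volume_id})
    LOG.warning(_LW("Volume %s is degraded."), volume_id)
    LOG.error(_LE("Failed to attach volume %s."), volume_id)

    # Not this: preformatted string and no translation marker.
    # LOG.error("Failed to attach volume %s." % volume_id)

    # Debug messages are not translated at all:
    LOG.debug("Entering attach for volume %s.", volume_id)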
The below methods in volume utils do not use the context parameter:
1. _usage_from_volume
2. _usage_from_backup
3. _usage_from_snapshot
4. _usage_from_consistencygroup
5. _usage_from_cgsnapshot
So the context parameter is removed from all of these methods.
Victor Stinner [Tue, 5 May 2015 12:50:31 +0000 (14:50 +0200)]
Replace suds test dependency with suds-jurko
suds-jurko is actively maintained (latest commit in December 2014) and supports
Python 3, whereas suds doesn't seem to be maintained anymore (last release in
2010).
suds-jurko is a drop-in replacement for suds, both packages provide the same
Python module.
ankitagrawal [Mon, 16 Mar 2015 13:26:11 +0000 (06:26 -0700)]
Fix missing translations for log messages
Fixed log translations in the volume.drivers and volume.manager modules
for error, info, and warning messages with the appropriate marker functions
according to the logging standards [1].
As LOG.warn has been deprecated [2], I have changed LOG.warn to
LOG.warning.
Wikipedia's list of common misspellings [1] has a machine-readable
version. This patch fixes those misspellings mentioned in the list
that don't have multiple correct variants (e.g. "accension", which can
be both "accession" and "ascension"); such ambiguous misspellings are left
untouched. The list of changes was manually re-checked for false
positives.
GlusterFS: Renaming test case to test_mount_glusterfs
With the patch [1], GlusterFS now uses the mount method in
RemoteFsClient. That method mounts a gluster share only if it has
not already been mounted.
So, having the test case named
test_mount_glusterfs_should_mount_correctly doesn't make sense now.
This patch renames it to 'test_mount_glusterfs'.
John Griffith [Mon, 4 May 2015 17:31:15 +0000 (11:31 -0600)]
Add new exception to retryables in SolidFire driver
Newer versions of SolidFire Element OS introduce an
internal snapshot mechanism during cloning that also
includes a new exception (xNotReadyForIO).
This can be encountered when programmatically cloning
hundreds of large volumes. Typically this will succeed
on a retry of the clone command, so we should add this
new exception to our list of retryable exceptions in
the SolidFire driver.
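Roughly, the change amounts to including the new fault name in the set the driver's retry logic treats as transient. Only 'xNotReadyForIO' is taken from the commit text; the other entry and the helper are placeholders:

    # Illustrative sketch: the clone path retries when the cluster API reports
    # a fault whose name is in this list.
    retryable_faults = ['xSomeExistingTransientFault', 'xNotReadyForIO']

    def is_retryable(fault_name):
        return fault_name in retryable_faults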
Tom Swanson [Thu, 7 May 2015 21:39:26 +0000 (16:39 -0500)]
Dell: Added support for update_migrated_volume
Added support for update_migrated_volume to the common driver.
Added rename_volume to dell_storagecenter_api to support it.
Added tests to test_dellsc.py and test_dellapi.py.
The only thing we do here is rename the newly migrated
destination volume on the Dell Storage Center to match
the original volume ID. Since we identify volumes
by volume ID, this prevents them from being lost.
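A simplified sketch of what this looks like; update_migrated_volume and rename_volume come from the commit text, while the class, client handling, and lookup call are illustrative assumptions:

    class DellCommonDriverSketch(object):
        """Illustration only, not the actual Dell driver code."""

        def __init__(self, client):
            self._client = client

        def update_migrated_volume(self, ctxt, volume, new_volume,
                                   original_volume_status):
            # After migration, rename the destination volume on the Storage
            # Center to the original volume's ID so it can still be found
            # by ID later instead of being "lost".
            with self._client.open_connection() as api:
                scvolume = api.find_volume(new_volume['id'])
                if scvolume:
                    api.rename_volume(scvolume, volume['id'])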