Ryan McNair [Mon, 14 Mar 2016 17:37:35 +0000 (17:37 +0000)]
Fix volume migration VolumeType exception
Since Iabf9c3fab56ffef50695ce45745f193273822b39 modified the default
loaded attributes, the obj_attr_is_set check in finish_volume_migration
now returns True for VolumeType. This should not get copied to the new
volume (since it's lazy-loaded based on the volume_type_id), so
this patch adds volume_type to the list of keys to ignore. To get
test coverage, test_finish_volume_migration is updated to use the
default expected_attributes, and the dict comprehension is updated so
it succeeds without glance_metadata being present.
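A minimal sketch of the idea, with an illustrative field list rather than
Cinder's exact one: volume_type joins the keys skipped when swapping
attributes, since it is lazy-loaded from volume_type_id rather than copied.

    IGNORE_KEYS = {'id', 'glance_metadata', 'volume_type'}

    def swap_fields(source, dest, fields):
        for key in fields:
            if key in IGNORE_KEYS:
                continue  # volume_type stays tied to volume_type_id
            source[key], dest[key] = dest[key], source[key]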
Tom Barron [Fri, 19 Feb 2016 23:19:14 +0000 (18:19 -0500)]
Fix retype failure when original has no volume type
cinder.objects.volume.Volume.finish_volume_migration()
should skip the volume_type_id field when swapping fields
between "source" and "dest" volumes. The "dest" volume
has already been created with the appropriate volume_type_id
and the "source" may not have a volume type.
This commit also changes a LOG.error() to a LOG.exception()
where finish_volume_migration is called, in order to
show the exception traceback in its calling context.
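For context, a minimal standalone illustration of the logging change:
LOG.exception() logs at ERROR level and also appends the active traceback,
which plain LOG.error() omits.

    import logging

    logging.basicConfig()
    LOG = logging.getLogger(__name__)

    def finish_volume_migration():
        raise RuntimeError('boom')  # stand-in for the real call

    try:
        finish_volume_migration()
    except Exception:
        # Unlike LOG.error(), this records the full traceback too.
        LOG.exception('Failed to finish volume migration')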
Gorka Eguileor [Mon, 14 Mar 2016 11:54:58 +0000 (12:54 +0100)]
Remove circular import to fix config generation
Patch https://review.openstack.org/278354 created a circular import when
it added an "import cinder.db" line to cinder/api/common.py, since
cinder/db/api.py imports cinder.api.common back, which breaks genconfig.
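One common way to break such a cycle, shown as a sketch rather than the
exact fix merged: defer the import to call time so the module no longer
pulls in cinder.db at load time.

    # Hypothetical illustration of a deferred import.
    def get_volume(context, volume_id):
        import cinder.db  # local import avoids the module-load-time cycle
        return cinder.db.volume_get(context, volume_id)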
Patrick East [Fri, 11 Mar 2016 23:28:27 +0000 (15:28 -0800)]
Fixup stats key for replication in init_host_with_pc
We were looking for just ‘replication’ in the stats, but that key isn’t
set by anyone and the check would always be false. We instead want
to key off of ‘replication_enabled’.
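As a one-line sketch (stats layout assumed):

    stats = {'replication_enabled': True}   # as reported by the driver
    # 'replication' is never set, so key off 'replication_enabled' instead.
    replication_enabled = stats.get('replication_enabled', False)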
Vincent Hou [Fri, 11 Mar 2016 21:25:18 +0000 (16:25 -0500)]
Add the key 'replication' and set the correct 'replication_status'
This patch fixes the minor issues following the implementation
of replication v2.1 for the storwize driver. It adds the key
'replication' to make sure that the cinder-volume service
can restart successfully, and sets the 'replication_status'
back to 'enabled' for failback.
Doug Hellmann [Fri, 11 Mar 2016 21:06:11 +0000 (16:06 -0500)]
register the config generator default hook with the right name
No config generator hooks should ever be registered with a name that
belongs to another project. In this case, using oslo.middleware.cors
means that *every other project* that loads the middleware gets this
application's defaults when the generator is run on a system with
everything installed (such as a dev box with devstack). Use the name
of the app instead, to ensure that the defaults are only set when this
app's sample config and documentation are being generated.
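For illustration, oslo.config generator hooks are registered through the
oslo.config.opts.defaults entry-point group in setup.cfg; the fix is to key
the hook on the app's own name (the exact module path below is an
assumption, not copied from the patch):

    [entry_points]
    oslo.config.opts.defaults =
        cinder = cinder.common.config:set_middleware_defaults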
Patrick East [Fri, 11 Mar 2016 18:27:50 +0000 (10:27 -0800)]
Fix for Pure drivers not checking full client version
We were only looking at major and minor versions… we needed to be
looking at the whole thing. This change allows for checking against
all three version parts, and for even differing schemes as time goes on.
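A sketch of comparing all three parts, assuming dotted numeric versions
(the driver's handling of other schemes is not shown):

    def version_tuple(version):
        # '1.4.1' -> (1, 4, 1); pad so '1.4' compares as (1, 4, 0).
        parts = [int(p) for p in version.split('.')]
        return tuple(parts + [0] * (3 - len(parts)))

    assert version_tuple('1.4.1') >= version_tuple('1.4')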
Patrick East [Fri, 11 Mar 2016 17:59:56 +0000 (09:59 -0800)]
Fixup for Pure drivers cheesecake replication setup
We had a remnant of the past replication design left behind in the setup
methods which was causing the arrays to be set up with bi-directional
replication. With the switch to cheesecake we do not need the secondary
arrays to be replicating to the primary. This change removes the step
which would configure that.
Ivan Kolodyazhny [Fri, 11 Mar 2016 16:14:06 +0000 (18:14 +0200)]
Remove empty directories and unused files from unit tests
1. Files from cinder/tests/unit/var are not used anywhere
2. cinder/tests/unit/wsgi/ and cinder/tests/unit/xenapi/ contain only
empty __init__.py files
Alex O'Rourke [Fri, 11 Mar 2016 00:09:01 +0000 (16:09 -0800)]
Cleanup 3PAR/LeftHand failover_host exceptions
The 3PAR and LeftHand drivers were raising the wrong exception,
VolumeBackendAPIException, instead of InvalidReplicationTarget,
which was putting the host in an unrecoverable state.
Dinesh Bhor [Tue, 9 Feb 2016 11:20:26 +0000 (03:20 -0800)]
Fix 500 error if 'offset' is out of range
If a large value is passed as the offset to the snapshots, volumes,
backups, consistencygroups or qos_specs list APIs, they
throw a 500 internal server error.
Moved the existing validate_integer() method from
cinder.api.openstack.wsgi.Controller to cinder.utils so
that it can also be used for validating the offset param for
an integer value in the _get_offset_param() method, returning a 400
error if the value is out of range.
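A hedged sketch of such a helper (the exact signature in cinder.utils is
assumed, not quoted from the patch):

    import webob.exc

    def validate_integer(value, name, min_value=None, max_value=None):
        try:
            value = int(value)
        except (ValueError, TypeError):
            raise webob.exc.HTTPBadRequest(
                explanation='%s must be an integer' % name)
        if min_value is not None and value < min_value:
            raise webob.exc.HTTPBadRequest(
                explanation='%s must be >= %d' % (name, min_value))
        if max_value is not None and value > max_value:
            raise webob.exc.HTTPBadRequest(
                explanation='%s must be <= %d' % (name, max_value))
        return value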
Alex O'Rourke [Thu, 10 Mar 2016 21:33:20 +0000 (13:33 -0800)]
Fix up failover_host exceptions to preserve states
UnableToFailOver and InvalidReplicationTarget currently do not save
the replication_status state. This patch adds host.save() in
order to write the change to the db.
In addition, the manager should honor the host's current replication
state when InvalidReplicationTarget is raised, instead of forcing
it into the 'enabled' state.
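A standalone sketch of the pattern; every name here is illustrative:

    class InvalidReplicationTarget(Exception):
        pass

    class Service(object):
        def __init__(self, replication_status):
            self.replication_status = replication_status

        def save(self):
            pass  # stand-in for persisting the change to the DB

    def failover(service, do_failover):
        previous = service.replication_status
        try:
            do_failover()
        except InvalidReplicationTarget:
            # Honor the prior state instead of forcing 'enabled',
            # and persist it so the DB reflects reality.
            service.replication_status = previous
            service.save()
            raise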
Helen Walsh [Sat, 13 Feb 2016 12:12:47 +0000 (12:12 +0000)]
EMC VMAX - SnapVX and other snapshot improvements
Use VMAX3 SnapVX for creating snapshots and volumes from
snapshots. Improves wait_for_sync to determine when
the max number of retries has been reached.
John Griffith [Wed, 9 Mar 2016 23:43:14 +0000 (16:43 -0700)]
Move replication_status update to init_with_rpc
We were using init_host to read replication_status from the
driver and update the service entry in the DB. It turns out
that on a fresh install this doesn't actually work, because
while we have multiple init methods for the backend, the
Service entry isn't actually created in a fresh deploy until
AFTER init_host. The result was that in some cases we were
trying to update a column on a non-existent Service in the
database.
This patch moves the replication_status updates for the
service into the init_with_rpc method. That method was
just a noop stub in the parent manager class, so we just
implement it in cinder.volume.manager and do what we need
with the replication update info.
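A skeleton of the ordering, with illustrative method and DB-call names:

    class VolumeManager(object):
        def __init__(self, driver, db):
            self.driver = driver
            self.db = db

        def init_host(self):
            # Runs before the Service row exists on a fresh deploy,
            # so the replication_status update no longer lives here.
            pass

        def init_with_rpc(self):
            # By now the Service row exists, so it is safe to persist
            # the driver-reported replication status.
            status = self.driver.replication_status
            self.db.service_update(replication_status=status)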
In order to allow a cloud operator to use policy-based
user access control for these operations, the context
should be elevated before DB operations.
After applying this change, the cloud operator can manage
policy for volume type operations like this.
1. To permit volume type operations for a specific user,
add the "storage_type_admin" role.
2. Add "admin_or_storage_type_admin" rule to policy.json.
"admin_or_storage_type_admin":
"is_admin:True or role:storage_type_admin",
3. Modify rule for types_manage.
"volume_extension:types_manage":
"rule:admin_or_storage_type_admin",
lisali [Wed, 9 Mar 2016 02:57:48 +0000 (02:57 +0000)]
Host selection in backup service
The scale-out of the backup service introduced a config option called
backup_use_same_backend, which indicates whether customers use the same
backup driver on every backup host. If the value is set to True, Cinder
selects any backup host for a backup task. If the value is False, Cinder
can only select backup.host for a backup task. Currently the default
value is set to False.
This patch changes the above config option to backup_use_same_host,
which indicates whether to select the same host for all tasks of a
given backup. The default value is set to False, meaning that we don't
care about backup.host when running a backup task and choose any
available host in the same AZ.
As Cinder currently doesn't support multiple backup drivers, this
change makes the config option more sensible.
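Sketched as an oslo.config option (help text paraphrased, not quoted):

    from oslo_config import cfg

    backup_opts = [
        cfg.BoolOpt('backup_use_same_host',
                    default=False,
                    help='Schedule backup tasks onto the same host that '
                         'originally took the backup (backup.host) rather '
                         'than any available host in the same AZ.'),
    ]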
Michał Dulko [Wed, 9 Mar 2016 15:04:20 +0000 (16:04 +0100)]
Add volume_type to volume object expected_attrs
We haven't had volume_type in expected_attrs for volume object lists.
This resulted in a situation in which, although we were joining the
volume_type explicitly in the DB API, the object just dropped the data.
Volume type was then lazy loaded when needed, so every "cinder list"
call was making additional DB queries per returned volume, causing
massive performance drop.
Actually there were two unnecessary DB calls per volume, because not
only was volume_type fetched, but also volume_type.extra_specs, as a result
of passing 'extra_specs' in expected_attrs when calling
VolumeType._from_db_volume in Volume._from_db_volume (wow, that's
complicated…).
This commit sorts this out by adding volume_type to expected_attrs to
match what we join in the DB. Please note that I'm not adding
consistencygroup and volume_attachment on purpose - adding them causes some
unit test failures, and this late in the release it seems risky to try
fixing that. The changes also required minor rework of the expected_attrs
infrastructure in the o.vo layer to be able to pass different values
when we query for just a single volume and when we fetch whole list (as
we're doing different joins in the DB layer in both cases).
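A sketch of passing different expected_attrs for list and single-volume
queries (attribute names other than volume_type are illustrative):

    def expected_attrs_for(list_call):
        attrs = ['metadata', 'volume_type']  # volume_type now consumed
        if not list_call:
            # Extra joins only done when fetching a single volume.
            attrs += ['consistencygroup', 'volume_attachment']
        return attrs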
Joshua Huber [Thu, 10 Mar 2016 01:15:36 +0000 (20:15 -0500)]
Pass new volume size when cloning (blockbridge)
When creating a cloned volume (using the Blockbridge driver), pass the
new volume capacity to the backend. After a successful clone, the
backend extends the new volume to the requested size.
Update unit test to ensure capacity param is supplied to the backend
API.
Vincent Hou [Tue, 1 Mar 2016 19:26:52 +0000 (14:26 -0500)]
Storwize: Update replication to v2.1
This patch updates replication to match the v2.1 spec. This makes
it possible to replicate an entire backend, and upon failover, all
replicated volumes will be failed over together.
cinder.conf should have the replication config group. Replication
can be configured either via multi-backend on one cinder-volume node
or on separate cinder-volume nodes, and the replication options are
put in cinder.conf where the primary back-end is located.
Patrick East [Wed, 9 Mar 2016 19:08:27 +0000 (11:08 -0800)]
Switch failover-host from rpc call to cast
There is some concern that with large numbers of volumes it will be
difficult for drivers to failover the host before the rpc timeout hits.
To avoid asking admins to bump the timeout just for these cases we can
switch it to do a non-blocking cast instead of a call. The difference now
is that the active_backend_id is not returned from the API call to
failover-host. An admin will have to look at the service-list output
to see when it has changed states from ‘failing-over’ and then check
what its active_backend_id is at that time.
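A sketch of the client-side switch with oslo.messaging (topic and wrapper
setup are assumed):

    from oslo_config import cfg
    import oslo_messaging as messaging

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='cinder-volume')
    client = messaging.RPCClient(transport, target)

    def failover_host(ctxt, host, secondary_backend_id=None):
        cctxt = client.prepare(server=host)
        # cast() returns immediately instead of blocking on a reply, so
        # active_backend_id is no longer handed back to the API caller.
        cctxt.cast(ctxt, 'failover_host',
                   secondary_backend_id=secondary_backend_id)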
Nate Potter [Wed, 4 Nov 2015 15:45:11 +0000 (15:45 +0000)]
Show qos_specs_id based on policy
Right now qos_specs_id is only shown to an admin user
when showing a volume type. This patch changes that to
be based on policy to allow for more flexibility. It
also adds unit tests for showing a volume type
with policy permissions for qos_specs_id as well as
extra_specs.
Jay S Bryant [Tue, 8 Mar 2016 11:22:39 +0000 (05:22 -0600)]
Remove remaining oslo-incubator code from Cinder
Most of the parts for oslo-incubator have moved to
oslo libraries. Cinder only had imageutils.py and
_i18n.py remaining in cinder/openstack/common. I
was able to remove both of these remaining incubator
components.
oslo_utils now has imageutils available. I moved
cinder/image/image_utils.py to use the new
imageutils from oslo_utils and updated all the other
places that were using the old imageutils import.
_i18n was only being used in scheduler/base_filter.py
which appears to have been accidentally left when the
scheduler was previously pulled out of incubator. I
fixed this oversight.
Finally, this patch deletes cinder/openstack/common,
cinder/openstack and the openstack-common.conf file.
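The import-level effect of the move, as a runnable snippet:

    # Previously: from cinder.openstack.common import imageutils
    from oslo_utils import imageutils

    info = imageutils.QemuImgInfo(
        'image: disk.qcow2\n'
        'file format: qcow2\n'
        'virtual size: 1.0G (1073741824 bytes)')
    print(info.file_format)  # qcow2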
Danny Al-Gaaf [Tue, 8 Mar 2016 15:43:15 +0000 (16:43 +0100)]
Pass RBD order to clone call
For cloning of an RBD volume, the rbd_store_chunk_size information from
cinder.conf should be used to calculate and pass the correct
order information to the clone() call of the rbd library.
Added a new test to check that the order is correctly calculated
from rbd_store_chunk_size while cloning.
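The calculation at the heart of the change, as a sketch (RBD's clone()
takes the object size as a power-of-two exponent, the "order"):

    import math

    rbd_store_chunk_size = 4  # MiB, as set in cinder.conf
    chunk_bytes = rbd_store_chunk_size * 1024 ** 2
    order = int(math.log(chunk_bytes, 2))  # 4 MiB objects -> order 22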
Wilson Liu [Mon, 22 Feb 2016 05:03:26 +0000 (13:03 +0800)]
Huawei: Check before delete host
Currently we delete the host without checking
whether the host already belongs to a host
group. If a host already belongs to a hostgroup,
an error will occur, so we should do the check
before deleting it.
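A standalone sketch of the guard; the client calls are illustrative, not
the driver's real method names:

    def remove_host_safely(client, host_id):
        # Deleting a host that still belongs to a hostgroup raises an
        # error on the array, so check membership first.
        if client.host_in_hostgroup(host_id):
            return
        client.delete_host(host_id)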