Joel Coffman [Tue, 4 Aug 2015 19:12:22 +0000 (15:12 -0400)]
Correct comment to be consistent with code
The volume_encryption_metadata_get function returns more than just
the encryption key id. This change generalizes the comment to refer
to all the metadata that is returned.
In vCenter inventory, volumes are organized under the folder:
<datacenter_vm_folder>/<volume_folder>, where <volume_folder>
is the value of driver config option 'vmware_volume_folder'.
Due to this organization, certain project-specific maintenance
tasks (using the vSphere client) can be difficult for the vCenter
admin. For example, the admin may need to migrate the volumes
of a particular project from one vCenter datastore to another.
This patch changes the folder hierarchy to:
<datacenter_vm_folder>/OpenStack/Project (<project_id>)/<volume_folder>
for better organization.
DocImpact
Modified default value of config option
'vmware_volume_folder' and updated its description.
Eric Brown [Sat, 15 Aug 2015 18:36:33 +0000 (11:36 -0700)]
Use min and max on IntOpt option types
The latest version of oslo.config supports setting a valid
range on IntOpt option values. This patch makes use of that feature
for a number of options with well-known minimum and maximum values.
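For illustration, a minimal sketch of the oslo.config feature in
question; the option name, bounds, and help text here are made up and
are not the actual options touched by this patch:

    from oslo_config import cfg

    opts = [
        cfg.IntOpt('osapi_max_limit',
                   default=1000,
                   min=1,       # values below min are rejected at parse time
                   max=10000,   # ...as are values above max
                   help='Maximum number of items in a single response.'),
    ]

    cfg.CONF.register_opts(opts)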
Patrick East [Tue, 11 Aug 2015 21:08:29 +0000 (14:08 -0700)]
Implement Clone CG in Pure Volume Drivers
This adds support for the consisgroup-create-from-src method when
specifying a source consistency group.
The driver will create a new Purity Protection Group, take a temporary
snapshot of the source Protection Group, create new volumes from the
consistent snapshot, and then add these new volumes into the newly
created protection group.
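A rough sketch of that flow; the self._array calls below are
hypothetical stand-ins for the Purity REST client, not the exact
driver code:

    def create_consistencygroup_from_src(self, ctxt, group, volumes,
                                         source_cg=None, source_vols=None):
        # 1. Create the new Purity protection group for the clone.
        self._array.create_pgroup(self._group_name(group))
        # 2. Snapshot the source protection group so every volume is
        #    copied from a single consistent point in time.
        tmp_snap = self._array.create_pgroup_snapshot(
            self._group_name(source_cg))
        try:
            for vol, src in zip(volumes, source_vols):
                # 3. Create each new volume from the consistent snapshot.
                self._array.copy_volume(
                    '%s.%s' % (tmp_snap, src['name']), vol['name'])
                # 4. Add it to the newly created protection group.
                self._array.add_to_pgroup(self._group_name(group),
                                          vol['name'])
        finally:
            # Remove the temporary snapshot regardless of outcome.
            self._array.destroy_pgroup_snapshot(tmp_snap)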
Jacob Gregor [Thu, 13 Aug 2015 20:31:16 +0000 (15:31 -0500)]
Reduced file size to prevent timeout
We currently have a test system that appears to be I/O constrained
and is hitting timeouts. Rather than mocking out the file copy, we
decided to reduce the file size in order to avoid timeouts. This way
we are not changing the way these test cases work, but we are reducing
the time required to run each test case. This patch will help improve
check and gate performance as well.
Joseph Vokt [Fri, 10 Jul 2015 20:10:40 +0000 (13:10 -0700)]
3PAR: Adding performance metrics to volume status
If one storage backend has high load while another has lower load,
when a user/admin creates a volume, the current evaluator/scheduler
may place the volume on the backend with higher load. In many cases,
the evaluator/scheduler should place that volume on the backend
with the lowest load for load balancing purposes. In general, for load
balancing virtual volume placement, an evaluator/scheduler needs
access to performance metrics (e.g. IOPS throughput, bandwidth,
latency, IO size) to measure load, but Cinder drivers don't generally
report those metrics.
Although these performance metrics are available from the 3PAR CLI,
they are currently inaccessible to the evaluator/scheduler. The
proposed changes add these metrics to the volume stats report and
thus make them accessible to the evaluator/scheduler.
In cinder/volume/drivers/san/hp/hp_3par_common.py, this patch uses
the 3PAR client method HP3parClient.getCPGStatData to get performance
metrics, and adds new fields in the output of
HP3PARCommon.get_volume_stats corresponding to the performance
metrics: IOPS throughput, bandwidth, latency, io_size, queue_length,
and avg_busy_perc. These metrics are computed by averaging samples
taken once per day over the last week.
In cinder/tests/unit/test_hp3par.py, it adds unit tests in
TestHP3PARFCDriver.test_get_volume_stats and in
TestHP3PARISCSIDriver.test_get_volume_stats.
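For a sense of the result, an illustrative shape of the per-pool
stats a scheduler would see; the surrounding keys follow the usual
get_volume_stats convention, and the exact key names, units, and
values for the new metrics are assumptions for illustration:

    pool_stats = {
        'pool_name': 'CPG1',
        'total_capacity_gb': 10240,
        'free_capacity_gb': 8192,
        # New metrics, each averaged from one sample per day over the
        # last week:
        'throughput': 1200,     # IOPS
        'bandwidth': 250,       # MB/s
        'latency': 3.2,         # ms
        'io_size': 16,          # KiB
        'queue_length': 2,
        'avg_busy_perc': 35,
    }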
Kuo-tung Kao [Tue, 28 Jul 2015 09:44:57 +0000 (17:44 +0800)]
Don't use context.elevated to get volume
Original Problem:
=================
Since the metadata (readonly and attached_mode) is stored in admin
metadata, a normal user needs context.elevated to retrieve it.
This has a side effect: a normal user can also retrieve any volume,
including volumes the user shouldn't be able to access, simply by
knowing its UUID.
Solution:
=========
Use context instead of context.elevated to get the volume, then add
the admin metadata to it.
This solution follows the conclusion reached at
cinder-meetup-summer-2015. It requires an extra database query.
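In essence, a sketch of the change, assuming the admin metadata is
loaded via a volume_admin_metadata_get DB helper (the surrounding
code is simplified):

    # Before: the elevated context bypassed the ownership check in
    # volume_get, so any user who knew a UUID could fetch any volume.
    volume = self.db.volume_get(context.elevated(), volume_id)

    # After: fetch with the caller's own context so normal project
    # scoping applies, then load the admin metadata separately (the
    # extra database query mentioned above).
    volume = self.db.volume_get(context, volume_id)
    volume['volume_admin_metadata'] = self.db.volume_admin_metadata_get(
        context.elevated(), volume_id)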
Kendall Nelson [Mon, 20 Jul 2015 20:06:21 +0000 (15:06 -0500)]
Enable cinder-manage to remove services
These changes enable cinder-manage to remove old services from the
database, as nova does. All that is added is a remove method in
manage.py, plus tests covering the new method in test_cmd.py.
The command will remove the binary from the list of running services
in the db that can be viewed via "cinder-manage service list". NOTE:
Active and restarted services will re-populate the DB with their current
status automatically.
To use this new command, the user types "cinder-manage service remove
<binary> <host>" and the service will be removed.
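A minimal sketch of what such a command looks like, assuming the
usual cinder-manage ServiceCommands structure; the real patch's
argument handling and error reporting may differ:

    from cinder import context
    from cinder import db

    class ServiceCommands(object):
        def remove(self, binary, host_name):
            """Completely remove a service record from the database."""
            ctxt = context.get_admin_context()
            svc = db.service_get_by_args(ctxt, host_name, binary)
            db.service_destroy(ctxt, svc['id'])
            print('Service %s on host %s removed.' % (binary, host_name))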
Tom Barron [Wed, 5 Aug 2015 23:19:37 +0000 (17:19 -0600)]
Fix backup init_host volume cleanup
The backup service manager's init_host method cleans up leftover
volumes from interrupted backup and restore operations. As the code
is currently written, it only does this for volumes with attachments.
This commit runs the volume cleanup for volumes in backing-up or restoring
state even when they do not have attachments.
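A sketch of the broadened cleanup, with hypothetical helper names;
the point is that volumes in these states are handled whether or not
they have attachments:

    for volume in self.db.volume_get_all_by_host(ctxt, self.host):
        if volume['status'] in ('backing-up', 'restoring-backup'):
            # Reset the interrupted operation even when the volume
            # has no attachments.
            self._cleanup_one_volume(ctxt, volume)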
Log a warning if the vCenter server version is less than 5.1.
The plan is to enforce the version in M release.
DocImpact
Running Cinder with a VMware vCenter version less than 5.1 is
deprecated. The minimum required version of vCenter server will
be raised to 5.1 in the 8.0.0 release.
Gorka Eguileor [Wed, 12 Aug 2015 17:09:53 +0000 (19:09 +0200)]
On Volume list only retrieve needed data from DB
Currently, when there is no limit set on a volume list query, we
retrieve all volumes and then limit them locally using
osapi_max_limit. Something similar happens when we use the marker for
subsequent pages: we get all volumes from that marker to the last
volume and then limit them locally.
We should be limiting it on the DB side so we only retrieve the data we
are actually going to return to the API caller.
This patch always limits the data retrieved from the DB; for the
offset to keep working as it did before, we need to apply the offset
on the DB side as well.
For reference, some tests were performed on a deployment with 60,000
volumes, 370,000 volume_metadata items, and 240,000
volume_glance_metadata items in the Cinder DB. Before the patch,
listing used nearly 10 GB of memory; with the patch, it uses only
about 500 MB.
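The gist of the change, as a simplified sketch (real Cinder
pagination goes through paginate_query with sort keys; this only
shows where limit and offset move):

    def volume_get_all(context, marker=None, limit=None, offset=None):
        query = model_query(context, models.Volume)
        if marker is not None:
            # Start after the marker row instead of fetching every row
            # past it and trimming in Python.
            query = query.filter(models.Volume.id > marker)
        if offset:
            query = query.offset(offset)  # offset applied DB-side too
        if limit is not None:
            query = query.limit(limit)    # never load more than needed
        return query.all()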
This patch changes the way we handle volume detach attempts when the
attachment_id is already detached and/or when the volume has no
attachments. We now handle this the same way we handle deleting
volumes that don't exist: we return success.
This patch also takes care to safely reset the volume status to what
it should be, depending on whether there are other attachments. If
the attachment_id passed in refers to an attachment that is already
detached, but other attachments remain, we want to make sure that the
volume is left in an in-use state, not available.
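Roughly, with illustrative DB-helper names (a sketch, not the exact
patch):

    def detach_volume(self, context, volume_id, attachment_id=None):
        if attachment_id:
            try:
                self.db.volume_attachment_get(context, attachment_id)
            except exception.VolumeAttachmentNotFound:
                # Already detached: as with deleting a nonexistent
                # volume, simply report success.
                return

        self.db.volume_detached(context, volume_id, attachment_id)

        # Reset status safely: if other attachments remain, the volume
        # must stay 'in-use' rather than flip to 'available'.
        remaining = self.db.volume_attachment_get_all_by_volume_id(
            context, volume_id)
        status = 'in-use' if remaining else 'available'
        self.db.volume_update(context, volume_id, {'status': status})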
Eric Harney [Tue, 28 Jul 2015 14:19:07 +0000 (10:19 -0400)]
Remove RetypeVD class, fix NFS driver retype
retype() only exists in the deprecated VolumeDriver() class
and not BaseVD. This means that drivers inheriting from *VD
hit attribute errors when trying to call driver.retype().
The thinking here is that RetypeVD isn't really necessary,
since all drivers must implement some version of the retype
method (even if it's a no-op with just a standard return code).
Therefore, just rely on this being in the base driver class(es)
rather than making it a separate abc class.
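The resulting shape, sketched (the return convention is assumed to be
the usual (retyped, model_update) pair):

    class BaseVD(object):
        # ...
        def retype(self, context, volume, new_type, diff, host):
            # Default no-op: report that nothing driver-specific was
            # done so the manager falls back to generic retype handling.
            return False, None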
Michal Dulko [Thu, 13 Aug 2015 12:45:15 +0000 (14:45 +0200)]
Add objects.register_all() to cinder-all command
In cmd/all.py, the registration of objects was missing. This caused
the cinder-all command to raise exceptions about unknown objects.
This commit fixes the problem.
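The fix, in essence, mirrors the other cinder-* entry points:

    from cinder import objects

    objects.register_all()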
Sasikanth [Thu, 25 Jun 2015 10:32:29 +0000 (16:02 +0530)]
GPFS volume encryption-at-rest support
This patch extends the GPFS driver to support volume encryption at
rest using the GPFS Native Encryption feature (https://ibm.biz/BdXPBm).
This includes:
1. Verification that the configured GPFS backend supports encryption
at rest.
2. The driver exposes the gpfs_encryption_rest capability via the
volume type convention.
Usage - the cloud admin creates a volume type "encrypted" with the
extra spec gpfs_encryption_rest=True. Every volume created using the
"encrypted" type will then be encrypted at rest (see the sketch
below).
DocImpact
Needs an update to the GPFS driver documentation, as this patch
introduces encrypted backend support.
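A sketch of how a driver can honor such an extra spec; the helper
function here is hypothetical, while get_volume_type_extra_specs is
Cinder's standard lookup:

    from cinder.volume import volume_types

    def _encryption_requested(volume):
        type_id = volume.get('volume_type_id')
        if not type_id:
            return False
        specs = volume_types.get_volume_type_extra_specs(type_id)
        # Extra specs are stored as strings, so compare loosely.
        return specs.get('gpfs_encryption_rest', '').lower() == 'true'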
The symlinks in /dev/disk/by-id/ (in a Linux guest OS running in a
VMware ESX based Nova instance) use the UUID of the virtual disk as
the SCSI device ID when disk.EnableUUID is set to True in the virtual
machine configuration file. This patch sets the UUID of the virtual
disk corresponding to a Cinder volume (at the time of volume creation)
to the volume ID so that end users (of the Nova instance) can identify
the device corresponding to an attached volume in the guest using the
/dev/disk/by-id symlink.
For example, given a Cinder volume with ID = xyz, a user can identify
the device corresponding to the volume in the instance's guest via the
symlink /dev/disk/by-id/scsi-xyz, provided disk.EnableUUID is set to
True in the instance's virtual machine configuration file (in ESX).
Note: If the UUID of the virtual disk corresponding to a volume is unset
at the time of volume creation, a random UUID will be used.
There are test cases that require oslo.vmware, and the change made in
the tox commit doesn't cover the case where run_tests.sh is used.
Without oslo.vmware in the test-requirements file, a number of test
cases fail when the library is not installed.
This patch resolves the issue by moving oslo.vmware over to the
test-requirements file.
Eric Harney [Mon, 16 Jun 2014 19:43:51 +0000 (15:43 -0400)]
LVM Thin Provisioning auto-detect
Add the ability to set lvm_type=auto, which will
enable thin provisioning if the system supports it,
and the configured volume group either:
a) has a pool LV with the expected name or
b) has no LVs
This, along with thin-provisioning accounting, will
give a path to move toward thin LVM as the default.
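A condensed sketch of the decision described above; the helper names
mirror the intent rather than the exact brick/LVM API:

    def _should_use_thin(conf, vg):
        if conf.lvm_type != 'auto':
            return conf.lvm_type == 'thin'
        if not vg.supports_thin_provisioning():
            return False
        pool_name = '%s-pool' % vg.vg_name  # the expected pool LV name
        # Use thin if the VG already has the pool LV, or has no LVs.
        return vg.get_volume(pool_name) is not None or not vg.get_volumes()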
Erlon R. Cruz [Tue, 4 Aug 2015 13:18:48 +0000 (10:18 -0300)]
Fix HNAS iSCSI 32 targets limitation error
When attaching more than 32 targets to an HNAS iSCSI backend, the
storage returns an error as there's a limitation on the number of
volumes that can be attached to a target. This patch fixes the
problem and creates new targets as needed.
HP Data Protector's backup of the vCenter inventory is failing for
'in-use' Cinder volumes because the vmdk is attached to two VMs (the
Nova instance and the backing VM corresponding to the Cinder volume).
The solution is to skip the backing VM during Data Protector's backup.
This patch adds the key 'cinder.volume.id', with its value set to the
volume UUID, in the volume's vCenter config file so that other vCenter
solutions can uniquely identify Cinder volumes.
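A sketch of setting that key via the VM's extraConfig; the option key
comes from this commit, while the spec-building code is a simplified,
assumed usage of the vSphere API through a suds-style client:

    # Build a reconfigure spec carrying the identifying key.
    factory = session.vim.client.factory
    opt = factory.create('ns0:OptionValue')
    opt.key = 'cinder.volume.id'
    opt.value = volume['id']
    config_spec = factory.create('ns0:VirtualMachineConfigSpec')
    config_spec.extraConfig = [opt]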
Daniel Wilson [Tue, 4 Aug 2015 18:27:06 +0000 (11:27 -0700)]
Enhance PureISCSIDriver multipath support
Add in support for alternative iSCSI portal information. We can also
return our whole list of connected ports for multipathing. This allows
us to avoid doing any iSCSI discovery in the driver code, since all
portals will now be tried in the connector code.
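Illustrative shape of the resulting connection info; the addresses
and IQN are made up, while the plural target_* keys follow the
standard Cinder/os-brick multipath convention:

    connection_info = {
        'driver_volume_type': 'iscsi',
        'data': {
            'target_discovered': False,  # no discovery in the driver
            'target_iqns': ['iqn.2010-06.com.purestorage:flasharray',
                            'iqn.2010-06.com.purestorage:flasharray'],
            'target_portals': ['10.0.0.10:3260', '10.0.0.11:3260'],
            'target_luns': [1, 1],
        },
    }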
Skyler Berg [Tue, 4 Aug 2015 21:44:58 +0000 (14:44 -0700)]
Fix argument order for assertEqual in tests
Some tests incorrectly use the order assertEqual(observed, expected)
instead of the correct order, assertEqual(expected, observed). This
patch corrects this issue.
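The convention, in one line (the assertion target here is made up):

    self.assertEqual(2, volume.attach_count)   # expected, then observed
    # not: self.assertEqual(volume.attach_count, 2)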