- Implemented APIs: create_volume, delete_volume, initialize_connection,
create_snapshot, delete_snapshot, create_volume_from_snapshot,
create_cloned_volume
- Modified etc/cinder/cinder.conf.sample adding details for the driver
- Added suds dependency in requirements.txt
- create_volume: performs no work; the volume's backing is created lazily
- delete_volume: delete backing if present
- initialize_connection: if the backing is not present, create it; otherwise
provide the backing details to Nova to perform the attach (sketched below)
Nova BP/vmware-nova-cinder-support
- create_snapshot: Creates the snapshot of the backing
- delete_snapshot: Deletes the snapshot of the backing
- create_volume_from_snapshot: Creates a full/linked clone from the snapshot
point in VC. In ESX, copies the VM backing files, registers them, and reverts
to the appropriate snapshot point.
- create_cloned_volume: Creates a full/linked clone in VC. In ESX, copies the
VM backing files and registers them as a new backing.
- Added appropriate unit tests
- Work item in BP/vmware-vmdk-cinder-driver
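A minimal sketch of the lazy-backing behavior described above; the helper
names and the connection-info shape are illustrative assumptions, not the
driver's actual internals:

    def create_volume(self, volume):
        # Intentionally does nothing: the backing is created lazily on
        # first attach (or snapshot/clone) rather than at create time.
        pass

    def initialize_connection(self, volume, connector):
        backing = self._get_backing_by_name(volume['name'])   # assumed helper
        if not backing:
            backing = self._create_backing(volume)            # assumed helper
        # Hand the backing details back so Nova can perform the attach.
        return {'driver_volume_type': 'vmdk',
                'data': {'volume': backing,
                         'volume_id': volume['id']}}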
Kurt Martin [Wed, 28 Aug 2013 21:08:49 +0000 (14:08 -0700)]
Add missing LH SAN driver features for Havana
The LeftHand driver did not support snapshots or volume extension.
The following features are part of the minimum set of features
that should be implemented in every Cinder driver. This patch
adds the following missing features (sketched below):
create snapshot
delete snapshot
create volume from snapshot
extend volume
It also fixes the docstrings in the unit tests so they pass the
checks.
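These map onto the standard Cinder driver entry points; a skeletal sketch
follows (the backend client calls are placeholders, not the real LeftHand
API):

    def create_snapshot(self, snapshot):
        """Create a snapshot of an existing volume on the backend."""
        self._cliq_create_snapshot(snapshot)                  # assumed helper

    def delete_snapshot(self, snapshot):
        """Delete a snapshot from the backend."""
        self._cliq_delete_snapshot(snapshot)                  # assumed helper

    def create_volume_from_snapshot(self, volume, snapshot):
        """Create a new volume whose contents come from a snapshot."""
        return self._cliq_clone_snapshot(volume, snapshot)    # assumed helper

    def extend_volume(self, volume, new_size):
        """Grow an existing volume to new_size (in GB)."""
        self._cliq_modify_volume_size(volume, new_size)       # assumed helper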
Julia Varlamova [Thu, 29 Aug 2013 07:41:34 +0000 (11:41 +0400)]
Add kwargs to create_volume in tests/utils.py
Add an availability_zone parameter and kwargs to make the create_volume
helper more flexible for use in unit tests (a sketch follows below).
The goal is to remove the inner _create_volume functions in:
- db/test_transfers.py
- api/contrib/test_backups.py
- test_gpfs.py
- test_volume_transfer.py
- test_volume.py
and to use create_volume from tests/utils instead.
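A rough sketch of the more flexible helper; the field defaults shown here
are illustrative, not the exact implementation:

    from cinder import db

    def create_volume(ctxt, host='test_host', display_name='test_volume',
                      display_description='this is a test volume',
                      status='available', size=1,
                      availability_zone='fake_az', **kwargs):
        """Create a volume row directly in the DB for use in unit tests."""
        vol = {'host': host,
               'size': size,
               'status': status,
               'display_name': display_name,
               'display_description': display_description,
               'availability_zone': availability_zone,
               'attach_status': 'detached'}
        vol.update(kwargs)       # callers can override or add any column
        return db.volume_create(ctxt, vol)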
Zhi Yan Liu [Fri, 19 Jul 2013 22:18:24 +0000 (06:18 +0800)]
Adding Read-Only volume attaching support to Cinder
1. Add an API extension to allow clients to set a volume's Read-Only flag
on demand.
2. Require clients to provide, and be aware of, the volume attach mode when
they call the 'os-attach' API.
3. Add an 'access_mode' field to the connection info returned by the
'os-initialize_connection' API. Clients such as Nova should use this field
to access the attached volume in the correct mode (see the sketch below).
Currently the access mode can be 'rw' or 'ro'.
4. In the future, drivers within Cinder need to ensure the volume is
exposed under the access mode described by the connection info; for
example, the backend should set the volume to read-only mode when the
connection info asks the client to consume the attached volume in 'ro'
mode. That means Read-Only is not only an attach mode but also a state of
the volume.
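Sketch of the new access_mode field on the connection info; the read-only
lookup shown is a simplification of the real admin-metadata check:

    def initialize_connection(self, context, volume, connector):
        conn_info = self.driver.initialize_connection(volume, connector)
        # Tell the client (e.g. Nova) which mode to use for the attach;
        # valid values are 'rw' and 'ro'.
        readonly = volume.get('readonly', False)   # simplified lookup
        conn_info['data']['access_mode'] = 'ro' if readonly else 'rw'
        return conn_info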
blueprint read-only-volumes
Change-Id: I4c84614d6541d5f7c358abadb957da7b8c3d9c48
Signed-off-by: Zhi Yan Liu <zhiyanl@cn.ibm.com>
Ben Swartzlander [Wed, 28 Aug 2013 04:43:20 +0000 (21:43 -0700)]
Add NFS/GlusterFS support to brick library
Refactor the NFS driver to use the brick library to perform mounts
rather than duplicating the mount code. Some of the NFS unit
test code was rewritten to test the brick implementation because
the old mount code was moved.
The Gluster driver was slightly affected in the refactoring, but
does not yet use brick to perform attaches.
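Usage of the brick remotefs client is roughly as follows; treat the exact
class and constructor arguments as assumptions:

    from cinder.brick.remotefs import remotefs

    # Assumed constructor arguments; the real signature may differ slightly.
    client = remotefs.RemoteFsClient(
        'nfs', root_helper='sudo cinder-rootwrap /etc/cinder/rootwrap.conf',
        nfs_mount_point_base='/var/lib/cinder/mnt')

    share = '192.168.0.1:/exports/vol1'
    client.mount(share, flags=['-o', 'vers=3'])    # mounts if not mounted
    mount_point = client.get_mount_point(share)    # deterministic path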
Joshua Harlow [Wed, 21 Aug 2013 01:06:32 +0000 (18:06 -0700)]
Move create_volume flow to a subfolder
To start splitting the create_volume flow into smaller files, move
the create_volume.py file to a package containing just __init__.py; this
package will then act as the base for the create_volume task modules.
Also add a utils file that, to start, contains the commonly used debug
listener attachment mechanism.
Zhongyue Luo [Tue, 27 Aug 2013 02:04:47 +0000 (10:04 +0800)]
Import order cleanup
According to
https://github.com/openstack-dev/hacking/blob/master/HACKING.rst#imports
imports must be ordered by the full module path according to the
template below (a concrete example follows it):
{{stdlib imports in human alphabetical order}}
\n
{{third-party lib imports in human alphabetical order}}
\n
{{project imports in human alphabetical order}}
\n
\n
{{begin your code}}
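Applied to a typical Cinder module, the ordering looks like this:

    # stdlib imports
    import os
    import socket

    # third-party lib imports
    from oslo.config import cfg

    # project imports
    from cinder import exception
    from cinder.openstack.common import log as logging
    from cinder import utils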
Michael Kerrin [Tue, 27 Aug 2013 10:48:45 +0000 (10:48 +0000)]
Migrate manage script needs import of db session module
This import looks like it is not used, but importing the module has the
side effect of defining the oslo configuration options. Without it the
script will not work.
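The fix boils down to an import kept purely for its side effect, roughly as
below; the module path is taken from the title above, so treat it as
approximate:

    # Not referenced directly: importing the module registers the oslo
    # configuration options (e.g. the database connection settings) that
    # the migrate manage script depends on.
    from cinder.db.sqlalchemy import session  # noqa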
Avishay Traeger [Mon, 19 Aug 2013 10:45:38 +0000 (13:45 +0300)]
Migration for attached volumes
Enable migration for attached volumes by calling Nova to have the
hypervisor copy the data (Nova's swap_volume feature). A new API
function is added for Nova to call once it is done copying to finish
the Cinder side of the migration.
The overall 'generic' migration flow (i.e., not performed by a driver)
is as follows (a condensed sketch follows the steps):
1. Create a new volume
2a. If the source volume's state is 'available', attach both volumes
and run 'dd' to copy, then detach both. When finished, call the
migrate_volume_completion function.
2b. If the source volume's state is 'in-use', call Nova to perform the
copy. Nova will attach the new volume, copy the data from the
original volume to the new one, and detach the original volume.
When the copy completes, Nova will call Cinder's new
migrate_volume_completion function.
3. The migrate_volume_completion function deletes the original volume
and calls the database API's finish_volume_migration function, which
copies all of the new volume's information to the original row and
deletes the new volume's row; thus we keep using the original
volume_id (the user sees no change).
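A condensed sketch of those steps; the internal helper names are
assumptions, only migrate_volume_completion and Nova's swap_volume feature
are named by the change itself:

    def _migrate_volume_generic(self, ctxt, volume, host):
        new_volume = self._create_volume_on_host(ctxt, volume, host)  # step 1 (assumed helper)
        if volume['status'] == 'available':
            # Step 2a: attach both volumes, 'dd' the data across, detach.
            self._copy_volume_data(ctxt, volume, new_volume)          # assumed helper
            self.migrate_volume_completion(ctxt, volume['id'], new_volume['id'])
        else:
            # Step 2b: the volume is in-use, so ask Nova to swap_volume;
            # Nova calls migrate_volume_completion when its copy finishes.
            self._request_nova_swap_volume(ctxt, volume, new_volume)  # assumed helper

    def migrate_volume_completion(self, ctxt, volume_id, new_volume_id):
        # Step 3: drop the original backing, then fold the new volume's
        # information into the original row so the user-visible volume ID
        # does not change.
        self.driver.delete_volume(self.db.volume_get(ctxt, volume_id))
        self.db.finish_volume_migration(ctxt, volume_id, new_volume_id)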
We also don't change the original volume's status, and instead add a
migration_status column which only selected users can see (e.g., admin).
The migration status is None when no migration is in progress, whether
it succeeded or failed. The admin should check the volume's current
host to determine success or failure. This is meant to simplify
operations. The user will only be aware of a migration if they try to
change the volume's state during the course of a migration.
As mentioned, we change the volume while keeping the original volume
ID. Because a volume's name depends on its ID, the new volume will have
a different name than the original. This is the purpose of the name_id
field in the database - the name is now based on name_id. So although
we keep the original volume's ID, we use the new volume's ID as the
name_id. Thus we can remove the rename_volume function - it is no
longer necessary because the name_id field in the database already
allows for the volume's name on the backend to not rely on its ID.
The user that can see the migration_status can also see the
name_id, in case they need to find the volume on the backend.
There were a few other places throughout the code that relied on
constructing a volume's name from its ID, and those were fixed.
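The name_id mechanism boils down to something like this (a sketch
consistent with the description above; the actual model code may differ in
detail):

    @property
    def name_id(self):
        # Falls back to the volume's own ID when no migration has happened.
        return self._name_id or self.id

    @property
    def name(self):
        # The backend name is derived from name_id, not from the volume ID,
        # e.g. volume_name_template = 'volume-%s'.
        return CONF.volume_name_template % self.name_id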
Add an optimized migration path for when the source and destination are
managed by the same storage. If the source and destination pools have
the same extent_size, it will use the optimal migratevdisk command.
Otherwise, it will use addvdiskcopy and rmvdiskcopy, which must be
watched until the copies are sync'ed.
Zhongyue Luo [Mon, 19 Aug 2013 07:39:00 +0000 (15:39 +0800)]
Removes exception instance creation on execute()
For every exception raised in utils.execute(), a new instance is created.
This patch removes this redundant step and directly uses exceptions
defined in Oslo.
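In effect, code that previously re-wrapped the Oslo exception now lets it
flow through unchanged; an illustrative before/after:

    from cinder import exception
    from cinder.openstack.common import processutils as putils

    # Before: catch the Oslo exception and raise a brand-new Cinder instance.
    def execute_before(*cmd, **kwargs):
        try:
            return putils.execute(*cmd, **kwargs)
        except putils.ProcessExecutionError as exc:
            raise exception.ProcessExecutionError(exit_code=exc.exit_code,
                                                  stdout=exc.stdout,
                                                  stderr=exc.stderr,
                                                  cmd=exc.cmd)

    # After: simply let the Oslo-defined exception propagate.
    def execute_after(*cmd, **kwargs):
        return putils.execute(*cmd, **kwargs)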
John Griffith [Tue, 27 Aug 2013 00:52:02 +0000 (00:52 +0000)]
Fix except in lvm.py
Consolidation of brick exceptions into a single
brick exception.py file neglected to update the files
that were using them
(see change I577d403f02c2fd0d727d2694776008e1e9791453).
The result is that the LVM driver was catching lvm.VolumeGroupNotFound,
which no longer exists since it now lives in brick/exception.
We still need to clean up exceptions all through here, and this still
isn't handled by service init anyway, but at least with this we are
raising a valid exception.
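The fix is essentially a one-line change to the except clause; a simplified
sketch in which the surrounding setup code and constructor arguments are
illustrative:

    from cinder import exception
    from cinder.brick import exception as brick_exception
    from cinder.brick.local_dev import lvm

    def check_for_setup_error(self):
        try:
            self.vg = lvm.LVM(self.configuration.volume_group,
                              root_helper=self._root_helper)   # args illustrative
        except brick_exception.VolumeGroupNotFound:
            # Previously this caught lvm.VolumeGroupNotFound, which moved to
            # cinder/brick/exception.py during the consolidation.
            msg = ('Volume group %s does not exist'
                   % self.configuration.volume_group)
            raise exception.VolumeBackendAPIException(data=msg)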
John Griffith [Mon, 26 Aug 2013 18:23:49 +0000 (12:23 -0600)]
Add automated check of conf sample
This adds check_uptodate to run_tests and
tox.ini checks.
This also requires that we add novaclient to the
test-requires, since we try to poke at cinder/compute/nova.
There is another implementation in use to do this that
pulls in openstack.common.config.generator
(see the Ceilometer project for a pretty slick implementation); however,
adding that also pulls in a rather large list of
updates and dependencies that, quite frankly, Cinder
doesn't need, and the ones that we do use break
a number of things in Cinder.
There are also a number of additional options added
to generate_sample that I don't think we need right now.
In the future we'll probably want to go this route,
but for now I'd just like to get automated checking
in place.
John Griffith [Sat, 24 Aug 2013 16:23:29 +0000 (10:23 -0600)]
Sync up with global requirements
This is a pull from openstack/requirements.
Note that pep8, pyflakes and flake8 are now specified
as deps in hacking, so we can remove those here. Also
if we don't remove them then we run into version conflicts
when hacking tries to set things up.
John Griffith [Sat, 24 Aug 2013 15:57:40 +0000 (09:57 -0600)]
Remove print statement in db api test
Remove the encryption-related print statements in
test_db_api; this should bring us up to speed
with the updated requirements and take care of the
py3.3-incompatible print issues.
Zhongyue Luo [Mon, 19 Aug 2013 04:03:35 +0000 (12:03 +0800)]
Sync execute() related exceptions with oslo
Cinder has its own ProcessExecutionError exception defined, which
already exists in processutils.py. This was done to require minimal
editing when applying processutils.execute() to Cinder. As a result,
new instances are being created for each exception raised during
CLI processing using execute().
As a first step toward removing the redundant exceptions, this
patch syncs ProcessExecutionError and UnknownArgumentError
with those in processutils to ease the transition.
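One minimal way to keep the two in sync during the transition (a sketch,
not necessarily the exact patch):

    # Make Cinder's exceptions thin shims over the processutils ones so
    # both spellings behave identically while callers are converted.
    from cinder.openstack.common import processutils


    class ProcessExecutionError(processutils.ProcessExecutionError):
        pass


    class UnknownArgumentError(processutils.UnknownArgumentError):
        pass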
Michael Kerrin [Fri, 23 Aug 2013 14:42:46 +0000 (14:42 +0000)]
The DB migration shouldn't populate types table
It is up to the system administrator to do this via an API call.
Also fixes up a few cases of dict() being used rather than the preferred {}
(https://github.com/openstack-dev/hacking/blob/master/HACKING.rst#dictionarieslists).
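The style point in question:

    # Preferred literal syntax per the hacking guidelines:
    volume_ref = {}                       # rather than: volume_ref = dict()
    metadata = {'readonly': 'True'}       # rather than: dict(readonly='True')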
Avishay Traeger [Thu, 22 Aug 2013 20:17:57 +0000 (23:17 +0300)]
Storwize/SVC: allow setting of I/O group
Allow setting the I/O group via config option or volume type. Also
updated comments and checks regarding iSCSI multipath, which is in Nova
and is not controlled by the driver.
Zhiteng Huang [Sat, 18 May 2013 14:21:28 +0000 (22:21 +0800)]
Implement QoS support for volumes
This patch implements generic Quality-of-Service (QoS) support for volumes.
The goal is to add an interface that cloud/Cinder admins can use to set
volume QoS, which can be enforced either in the hypervisor, on the Cinder
back-end, or both. QoS specifications are added as a standalone entity
(only visible to admins), so an admin can create/update/delete QoS
specifications and associate/disassociate them with other entities, in
this case volume types.
Note that while it is possible for Cinder to set the granularity of QoS
control to every single volume, this patch puts the control granularity at
the level of volumes of the same type to minimize the impact on other
Cinder parts. In other words, the design is to bind QoS to volume types:
the Cinder admin associates volume types with QoS specifications, and
volumes of the same volume type share the same QoS specifications.
QoS can mean many different things, so it is unlikely we can come up with
an interpretation that all vendors agree on. The approach this
implementation takes is therefore to make Quality-of-Service specs
free-form, i.e. expressed as key/value pairs.
Changes:
- Add a quality_of_service_specs table, using an adjacency-list relation to
store a specs entry and its detailed specs as key/value pairs. Note that,
to distinguish where the QoS specs should be consumed, each QoS specs
entity has a 'consumer' (i.e. fixed key) whose value indicates where the
admin would like the QoS policy to be enforced/consumed; currently three
values are considered valid: 'front-end' (Nova Compute), 'back-end'
(Cinder back-end), and 'both'. The default value for 'consumer' is
'back-end' (see the example at the end of this entry);
- Add a new API extension 'qos_specs_manage' to allow list/create/update/
delete/associate/disassociate of QoS specs;
- Add volume/qos_specs internal API for qos specs manipulation;
- Add 'qos_specs' info to data structure when
initialize_connection() is called.
- Add 'qos_specs' to request_specs and filter properties for
a volume create request.
TODO:
- Modify 'type_manage' API extension to be able to accept qos info.
- Modify volume_types.create() to accept qos info and do the checks.
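An illustrative QoS specs entity as free-form key/value pairs; the spec
keys shown are examples, not a fixed schema:

    qos_specs = {
        'name': 'gold-tier',
        'consumer': 'back-end',   # 'front-end' (Nova), 'back-end' (Cinder), or 'both'
        'specs': {
            'total_iops_sec': '1000',
            'total_bytes_sec': '1048576',
        },
    }
    # Once associated with a volume type, every volume of that type shares
    # these specs; they are also passed along in the 'qos_specs' field of
    # the connection info from initialize_connection() and in the
    # request_spec/filter properties for volume creation.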
Joshua Harlow [Wed, 21 Aug 2013 00:47:12 +0000 (17:47 -0700)]
Move the frequently used injection task to the base folder.
Key/value injection is a typical way to bootstrap a
dependency based system (as there needs to be an
initial set of values to depend on). Since this inject
task is frequently used to accomplish this we should
just move it to the base file.
Ronen Kat [Thu, 8 Aug 2013 09:12:37 +0000 (12:12 +0300)]
Backup driver for IBM Tivoli Storage Manager (TSM)
An implementation of a Cinder backup driver using TSM as a backend for
Cinder backups. The driver is a wrapper for the TSM command line
utility dsmc, and uses TSM image backup and restore.
John Griffith [Wed, 21 Aug 2013 19:05:30 +0000 (13:05 -0600)]
Dont crash service if sf cluster isnt available
Currently if the SolidFire driver is configured but the cluster
isn't available for some reason the ensure_export call will raise
an unhandled exception and crash the volume-service.
We should be able to handle things like losing connectivity to
a single back-end without impacting the other volume-service
backends.
We'll wrap the ensure_export call in a try block here and
return None in the case that the connection can't be made.
This will keep the service from crashing and log an error
message that the connection timed out.
Additional work would include adding a periodic retry task
to the manager to try to start the backend service for us
on some regular interval in case the device comes back.
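Roughly, the guarded call looks like this (a sketch; the helper name and
the exception type caught are assumptions):

    def ensure_export(self, context, volume):
        try:
            return self._do_export(volume)                    # assumed helper
        except exception.SolidFireAPIException:
            LOG.error('Unable to reach SolidFire cluster while running '
                      'ensure_export; returning None so the volume service '
                      'keeps running for the other backends.')
            return None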
Kurt Martin [Thu, 22 Aug 2013 16:28:37 +0000 (09:28 -0700)]
3PAR driver add missing domain check on QOS cmd
The 3PAR drivers need to support CPGs that are not part of a
virtual domain on the 3PAR backend. This patch now checks
if the domain is set before calling the createvvset command.
This instance was overlooked in the following patch that has
already been merged: https://review.openstack.org/#/c/42697/
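The added check amounts to something like this (a sketch; the domain lookup
helper and the CLI wrapper are assumptions):

    def _add_volume_to_volume_set(self, volume_name, vvs_name, cpg):
        domain = self.get_domain(cpg)                  # assumed lookup helper
        if domain is not None:
            cmd = ['createvvset', '-domain', domain, vvs_name]
        else:
            # CPGs outside a virtual domain must not pass a -domain argument.
            cmd = ['createvvset', vvs_name]
        self._cli_run(cmd)                             # assumed CLI wrapper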