Zhongyue Luo [Tue, 27 Aug 2013 02:04:47 +0000 (10:04 +0800)]
Import order cleanup
According to
https://github.com/openstack-dev/hacking/blob/master/HACKING.rst#imports
imports must be ordered by the full module path, following the template below:
{{stdlib imports in human alphabetical order}}
\n
{{third-party lib imports in human alphabetical order}}
\n
{{project imports in human alphabetical order}}
\n
\n
{{begin your code}}
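For illustration, a module header following that template might look like
this (the specific module names are just examples, not taken from this
change):

    # stdlib imports, human alphabetical order
    import os
    import sys

    # third-party lib imports
    from oslo.config import cfg
    import webob.exc

    # project imports
    from cinder import exception
    from cinder.openstack.common import log as logging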
Michael Kerrin [Tue, 27 Aug 2013 10:48:45 +0000 (10:48 +0000)]
Migrate manage script needs import of db session module
This import looks like it is not used, but importing the module has the
side effect of defining the oslo configuration options. Without it the
script will not work.
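A minimal sketch of the pattern (module paths and option names are
illustrative, not necessarily the exact ones touched here):

    from oslo.config import cfg

    # This import looks unused, but importing the session module registers
    # the database options on the global config object as a side effect.
    # Without it, reading the option below would raise cfg.NoSuchOptError.
    from cinder.db.sqlalchemy import session  # noqa

    CONF = cfg.CONF
    print(CONF.sql_connection)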
Avishay Traeger [Mon, 19 Aug 2013 10:45:38 +0000 (13:45 +0300)]
Migration for attached volumes
Enable migration for attached volumes by calling Nova to have the
hypervisor copy the data (Nova's swap_volume feature). A new API
function is added for Nova to call once it is done copying to finish
the Cinder side of the migration.
The overall 'generic' migration flow (i.e., not performed by a driver)
is as follows:
1. Creates a new volume
2a. If the source volume's state is 'available', attach both volumes
and run 'dd' to copy, then detach both. When finished, call the
migrate_volume_completion function.
2b. If the source volume's state is 'in-use', call Nova to perform the
copy. Nova will attach the new volume, copy the data from the
original volume to the new one, and detach the original volume.
When the copy completes, Nova will call Cinder's new
migrate_volume_completion function.
3. The migrate_volume_completion function deletes the original volume
and calls the database API's finish_volume_migration function. This
copies all of the new volume's information to the original and deletes
the new volume's row, so we can keep using the original volume_id
(the user sees no change).
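A rough sketch of the flow above (function and RPC names are simplified,
not the actual manager code):

    def _migrate_volume_generic(context, volume, host):
        # 1. Create a new volume on the destination host.
        new_volume = _create_volume_on_host(context, volume, host)

        if volume['status'] == 'available':
            # 2a. Copy the data ourselves with dd, then finish up.
            _attach_copy_detach(context, volume, new_volume)
            migrate_volume_completion(context, volume, new_volume)
        else:
            # 2b. In-use volume: ask Nova to swap the attachment and copy
            # the data; Nova calls migrate_volume_completion() when done.
            _nova_swap_volume(context, volume, new_volume)

    def migrate_volume_completion(context, volume, new_volume):
        # 3. Delete the original backing volume and fold the new volume's
        # information into the original DB row, keeping the original
        # volume_id (the new volume's id becomes the name_id).
        _delete_backing_volume(context, volume)
        db.finish_volume_migration(context, volume['id'], new_volume['id'])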
We also don't change the original volume's status, and instead add a
migration_status column which only selected users can see (e.g., admin).
The migration status is None when no migration is in progress, whether
it succeeded or failed. The admin should check the volume's current
host to determine success or failure. This is meant to simplify
operations. The user will only be aware of a migration if they try to
change the volume's state during the course of a migration.
As mentioned, we change the volume while keeping the original volume
ID. Because a volume's name depends on its ID, the new volume will have
a different name than the original. This is the purpose of the name_id
field in the database - the name is now based on name_id. So although
we keep the original volume's ID, we use the new volume's ID as the
name_id. Thus we can remove the rename_volume function - it is no
longer necessary because the name_id field in the database already
allows for the volume's name on the backend to not rely on its ID.
A user that can see the migration_status can also see the
name_id, in case they need to find the volume on the backend.
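The name_id mechanism boils down to something like this (a simplified,
self-contained stand-in for the volume DB model, not the real code):

    class Volume(object):
        """Illustrative stand-in for the volume DB model."""

        volume_name_template = 'volume-%s'

        def __init__(self, id, _name_id=None):
            self.id = id
            self._name_id = _name_id  # set to the new volume's id after migration

        @property
        def name_id(self):
            return self._name_id or self.id

        @property
        def name(self):
            # The backend name follows name_id rather than id, so a migrated
            # volume keeps its original id while pointing at the new backend
            # object; no rename on the backend is needed.
            return self.volume_name_template % self.name_id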
There were a few other places throughout the code that relied on
constructing a volume's name from its ID, and those were fixed.
Also add an optimized migration path for when the source and destination
are managed by the same storage backend. If the source and destination
pools have the same extent_size, the driver uses the optimal migratevdisk
command. Otherwise, it uses addvdiskcopy and rmvdiskcopy, which must be
monitored until the copies are synchronized.
Zhongyue Luo [Mon, 19 Aug 2013 07:39:00 +0000 (15:39 +0800)]
Removes exception instance creation on execute()
For every exception raised in utils.execute(), a new instance is created.
This patch removes this redundant step and directly uses exceptions
defined in Oslo.
John Griffith [Tue, 27 Aug 2013 00:52:02 +0000 (00:52 +0000)]
Fix except in lvm.py
Consolidation of brick exceptions into a single
brick exception.py file neglected to update the files
that were using them
(see change I577d403f02c2fd0d727d2694776008e1e9791453).
The result is that the lvm driver was doing an except on
lvm.VolumeGroupNotFound, which no longer exists since it moved to
brick/exception.
We still need to clean up exceptions all through here, and this still
isn't handled by service init anyway, but at least with this we are
raising a valid exception.
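The fix amounts to catching the exception from its new home, roughly as
follows (module path and surrounding code simplified, not the exact
driver code):

    from cinder.brick import exception as brick_exception

    def _check_volume_group(vg):
        try:
            vg.update_volume_group_info()
        except brick_exception.VolumeGroupNotFound:
            # Previously this read "except lvm.VolumeGroupNotFound", which
            # no longer exists since the class moved to brick/exception.py.
            raise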
John Griffith [Mon, 26 Aug 2013 18:23:49 +0000 (12:23 -0600)]
Add automated check of conf sample
This adds check_uptodate to run_tests and
tox.ini checks.
This also requires that we add novaclient to the
test-requires, since we try and poke at cinder/compute/nova.
There is another implementation in use to do this that
pulls in openstack.common.config.generator
(see the ceilometer project for a pretty slick impl). However,
adding that also pulls in a rather large list of
updates and dependencies that quite frankly Cinder
doesn't need, and the ones that we do use break
a number of things in Cinder.
There are also a number of additional options added
to generate_sample that I don't think we need right now.
In the future we'll probably want to go this route
but for now I'd just like to get automated checking
in place.
John Griffith [Sat, 24 Aug 2013 16:23:29 +0000 (10:23 -0600)]
Sync up with global requirements
This is a pull from openstack/requirements.
Note that pep8, pyflakes and flake8 are now specified
as deps in hacking, so we can remove those here. Also
if we don't remove them then we run into version conflicts
when hacking tries to set things up.
John Griffith [Sat, 24 Aug 2013 15:57:40 +0000 (09:57 -0600)]
Remove print statement in db api test
Remove the encryption-related print statements in
test_db_api. This should bring us up to speed
now for the updated requirements and take care of the
Python 3.3 print incompatibility issues.
Zhongyue Luo [Mon, 19 Aug 2013 04:03:35 +0000 (12:03 +0800)]
Sync execute() related exceptions with oslo
Cinder has its own ProcessExecutionError exception defined which
already exists in processutils.py. This was done to have minimal
editing when applying processutils.execute() to Cinder. As a result,
new instances are being created for each exception raised during
CLI processing using execute().
As the first step toward removing the redundant exceptions, this
patch syncs ProcessExecutionError and UnknownArgumentError
with those in processutils to ease the transition.
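The sync roughly amounts to pointing Cinder's names at the Oslo classes so
existing call sites keep working; a sketch, assuming the aliases live in
cinder/exception.py:

    from cinder.openstack.common import processutils

    # Re-export the Oslo classes under the old Cinder names so code that
    # catches exception.ProcessExecutionError keeps working while execute()
    # raises the processutils versions directly.
    ProcessExecutionError = processutils.ProcessExecutionError
    UnknownArgumentError = processutils.UnknownArgumentError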
Michael Kerrin [Fri, 23 Aug 2013 14:42:46 +0000 (14:42 +0000)]
The DB migration shouldn't populate types table
It is up to the system administrator to do this via an API call.
Also fixes up a few cases of dict() being used rather than the preferred {}
(https://github.com/openstack-dev/hacking/blob/master/HACKING.rst#dictionarieslists)
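For reference, the style change is simply:

    # Discouraged by the hacking guidelines:
    volume_type = dict(name='gold', extra_specs=dict())

    # Preferred:
    volume_type = {'name': 'gold', 'extra_specs': {}}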
Avishay Traeger [Thu, 22 Aug 2013 20:17:57 +0000 (23:17 +0300)]
Storwize/SVC: allow setting of I/O group
Allow setting the I/O group via config option or volume type. Also
updated comments and checks regarding iSCSI multipath, which is in Nova
and is not controlled by the driver.
Zhiteng Huang [Sat, 18 May 2013 14:21:28 +0000 (22:21 +0800)]
Implement QoS support for volumes
This patch is to implement generic Quality-of-Service(QoS) support for volumes.
The goal is to add an interface that cloud/Cinder admins can use to set
volume QoS, which can be enforced either in the hypervisor, on the Cinder
back-end, or both. QoS specifications are added as a standalone
(admin-only) entity, so the admin can create/update/delete QoS
specifications and associate/disassociate them with other entities, in
this case volume types.
Note that while it's possible for Cinder to set the granularity of QoS control
to every single volume, this patch puts the control granularity to the level
of volumes of the same type to minimize the impact on other parts of
Cinder. In other words, the design is to bind QoS to volume types: the
Cinder admin can associate volume types with QoS specifications, and
volumes of the same volume type share the same QoS specifications.
QoS can mean so many different things that it's unlikely we can come up
with an interpretation that all vendors agree on. So the approach this
implementation takes is to make Quality-of-Service specs free-form, i.e.
expressed as key/value pairs.
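So a QoS spec ends up being a named bag of key/value pairs plus a reserved
'consumer' key (described under Changes below); for example (the specific
spec keys are illustrative, not a fixed schema):

    qos_spec = {
        'name': 'high-iops',
        # Reserved key: where the policy is enforced.
        # Valid values: 'front-end' (Nova Compute), 'back-end' (Cinder
        # back-end) or 'both'; the default is 'back-end'.
        'consumer': 'back-end',
        # Free-form keys interpreted by the consumer (hypothetical examples).
        'specs': {
            'total_iops_sec': '1000',
            'total_bytes_sec': '104857600',
        },
    }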
Changes:
- Add a quality_of_service_specs table, using an adjacency list relation to
  store a specs entry and its detailed specs as key/values. Note that to be
  able to distinguish where the QoS specs should be consumed, each QoS specs
  entity has a 'consumer' (i.e. a fixed key) whose value indicates where the
  admin would like the QoS policy to be enforced/consumed; currently these
  three values are considered valid: 'front-end' (Nova Compute), 'back-end'
  (Cinder back-end), and 'both'. The default value for 'consumer' is
  'back-end';
- Add a new API extension 'qos_specs_manage' to allow list/create/update/
delete/associate/disassociate of QoS specs;
- Add volume/qos_specs internal API for qos specs manipulation;
- Add 'qos_specs' info to data structure when
initialize_connection() is called.
- Add 'qos_specs' to request_specs and filter properties for
a volume create request.
TODO
- Modify 'type_manage' API extension to be able to accept qos info.
- Modify volume_types.create() to accept qos info and do the checks.
Joshua Harlow [Wed, 21 Aug 2013 00:47:12 +0000 (17:47 -0700)]
Move the frequently used injection task to the base folder.
Key/value injection is a typical way to bootstrap a
dependency based system (as there needs to be an
initial set of values to depend on). Since this inject
task is frequently used to accomplish this we should
just move it to the base file.
Ronen Kat [Thu, 8 Aug 2013 09:12:37 +0000 (12:12 +0300)]
Backup driver for IBM Tivoli Storage Manager (TSM)
An implementation of Cinder backup driver using TSM as a backend for
Cinder backups. The driver is a wrapper for the TSM command line
utility dsmc, and uses TSM image backup and restore.
John Griffith [Wed, 21 Aug 2013 19:05:30 +0000 (13:05 -0600)]
Don't crash service if SF cluster isn't available
Currently if the SolidFire driver is configured but the cluster
isn't available for some reason the ensure_export call will raise
an unhandled exception and crash the volume-service.
We should be able to handle things like losing connectivity to
a single back-end without impacting the other volume-service
backends.
We'll wrap the ensure_export call in a try block here and
return None in the case that the connection can't be made.
This will keep the service from crashing and log an error
message that the connection timed out.
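In outline the change looks like this (a simplified sketch of the driver
method, shown outside its class; exception and helper names approximate
the real SolidFire driver):

    from cinder import exception
    from cinder.openstack.common import log as logging

    LOG = logging.getLogger(__name__)

    def ensure_export(self, context, volume):
        try:
            return self._do_export(volume)
        except exception.SolidFireAPIException:
            # Cluster unreachable (e.g. the connection timed out): log an
            # error and return None so the volume-service keeps running
            # instead of crashing during init.
            LOG.error("Unable to ensure export of volume %s, SolidFire "
                      "cluster unreachable" % volume['id'])
            return None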
Additional work would include adding a periodic retry task
to the manager to try to start the backend service for us
at some regular interval in case the device comes back.
Kurt Martin [Thu, 22 Aug 2013 16:28:37 +0000 (09:28 -0700)]
3PAR driver add missing domain check on QOS cmd
The 3PAR drivers need to support CPGs that are not part of a
virtual domain on the 3PAR backend. This patch now checks
if the domain is set before calling the createvvset command.
This instance was overlooked in the following patch that has
already been merged: https://review.openstack.org/#/c/42697/
Julia Varlamova [Mon, 12 Aug 2013 08:49:25 +0000 (12:49 +0400)]
Refactor cinder/tests/test_volume.py
Refactor test_volume.py to remove code duplication.
- remove the repeated definition of the 'expected' dict from the
  'test_create_delete_volume' and 'test_create_delete_snapshot'
  methods;
- move 'test_copy_volume_to_image_status_available',
'test_copy_volume_to_image_status_use' and
'test_copy_volume_to_image_exception' methods to new class
CopyVolumeToImageTestCase;
- move 'test_volume_get_active_by_window' and
'test_snapshot_get_active_by_window' to new class
GetActiveByWindowTestCase;
- create base class BaseVolumeTestCase for TestCase classes.
bp cinder-tests-improvement
This patch adds some cinder utils functions
that wrap calls into brick to automatically
populate cinder's root wrap helper. This
prevents propagating the recreation of the
sudo cinder-rootwrap string.
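A sketch of the idea (function and option names approximate Cinder's, and
the brick call signature is assumed):

    from oslo.config import cfg

    from cinder.brick.initiator import connector

    CONF = cfg.CONF

    def get_root_helper():
        # Build the root wrap helper string in one place instead of
        # recreating 'sudo cinder-rootwrap <conf>' at every brick call site.
        return 'sudo cinder-rootwrap %s' % CONF.rootwrap_config

    def brick_get_connector_properties():
        # Thin wrapper that injects Cinder's root helper into the brick call.
        return connector.get_connector_properties(get_root_helper())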
John Griffith [Mon, 12 Aug 2013 21:50:53 +0000 (21:50 +0000)]
Fix python 3 pep8 errors for print
The new auto-pulled requirements files exposed
Python 3 compatibility issues in a number of modules.
We added these to the tox.ini ignore list temporarily;
this change updates the print routines in bin/cinder
and removes the debug cruft that was in the unit tests.
Requirements and test-requirements will be updated
appropriately in a follow-up patch.
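The print fixes are the usual Python 3 change, e.g.:

    from __future__ import print_function

    volumes = ['vol-1', 'vol-2']

    # Python-2-only statement form (breaks under Python 3):
    #     print "Volumes: %s" % volumes
    # Function form that works on both:
    print("Volumes: %s" % volumes)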
Jay S. Bryant [Wed, 21 Aug 2013 21:39:04 +0000 (16:39 -0500)]
Fix incorrect msgstr's to avoid translation errors
There are numerous msgstr's in the cinder.po files for en_US,
en_AU and en_GB that are incorrect translations of the associated
msgid. It appears that a number of them are the result of past fuzzy
translations gone wrong, and some may also be the result of copy/paste
mistakes. Regardless of the origin, it makes debugging and development
very difficult when the string coming out in the logs doesn't match
what is expected. This patch fixes the numerous bad msgstr's.
Once this commit is merged, Transifex will catch the change to
the Cinder po files and then spin a new Transifex Translations import.
Bill Owen [Fri, 16 Aug 2013 22:49:06 +0000 (15:49 -0700)]
GPFS use clone_image for creating volumes
If both the source and target of a GPFS create-volume-from-image
operation are backed by GPFS storage, use the clone_image method to
move the image data to the new volume.
The copy_image_to_volume method is used only when this is not the case,
and it uses image_utils.fetch_to_raw to move the image data to the new
volume.
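In outline the decision looks like this (a simplified sketch; the exact
driver method signature and helpers may differ):

    def clone_image(self, volume, image_location, image_id):
        """Fast path: clone the image file when it also lives on GPFS."""
        if not self._is_gpfs_path(image_location):
            # Not GPFS-backed: tell the manager to fall back to
            # copy_image_to_volume(), which uses image_utils.fetch_to_raw().
            return None, False
        self._create_gpfs_clone(image_location, self.local_path(volume))
        return {'provider_location': None}, True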
Kurt Martin [Wed, 21 Aug 2013 16:38:54 +0000 (09:38 -0700)]
3PAR driver terminate connection host validation
The 3PAR backend does not allow FQDN host names (e.g. foo.rose.hp.com);
instead it requires just 'foo' without the 'rose.hp.com'. This patch now
validates the host name in terminate_connection by calling _safe_host,
just as is done in initialize_connection when creating the host on the
3PAR backend.
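A minimal sketch of what a _safe_host-style helper does, assuming it
simply strips the domain portion as described above:

    def _safe_host(hostname):
        # The 3PAR backend rejects FQDNs, so keep only the short host name.
        return hostname.split('.', 1)[0]

    assert _safe_host('foo.rose.hp.com') == 'foo'
    assert _safe_host('foo') == 'foo'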
Michael Kerrin [Wed, 14 Aug 2013 10:16:42 +0000 (10:16 +0000)]
Re-enable a lot of cinder scheduler tests
This highlights and fixes a NoneType exception in the scheduler's
_get_weighted_candidates method.
This was originally added to get around the fact that the filter tests
required cinder to be correctly installed on the system. But the
is_cinder_installed condition was always returning False, and this hid
problems with old packages being installed on the system. Also,
930f5891b0815e1b49b9b2cc840e0c24b2796e84 added setup.py to run_tests.sh
automatically, so we can remove this.
See https://review.openstack.org/#/c/20213/ for history.
Kui Shi [Wed, 21 Aug 2013 11:07:43 +0000 (19:07 +0800)]
emit warning while running flake8 without virtual env
run_tests.sh -N -p
This will call the flake8 installed on your host to check PEP8, and
the flake8 plugin "OpenStack hacking" may not be installed on your
host, so this command may not detect the OpenStack Style Commandments
supplied by hacking (e.g. H202).
run_tests.sh -p
This will call the flake8 from the virtual env, so the "OpenStack
hacking" flake8 plugin installed in the virtual env will be used.
The result from "run_tests.sh -p" should be trusted, and Jenkins uses a
virtual env to run flake8 too.
When "-N" is enabled, emit a warning to remind the user.
xiaoxi_chen [Mon, 12 Aug 2013 11:25:59 +0000 (19:25 +0800)]
Set bootable flag for volume cloned from image
In the previous code we did not set the bootable flag for a volume
cloned from an image. This does not appear to break booting
from a volume at present, but the status displayed by the cinder
client is wrong.
The bug was reported and fixed several months ago (bug #1185533),
but the author did not provide a unit test with the patch, and it
has since been broken again by someone else. This patch re-fixes the
bug together with unit tests.
Also add a function named _handle_bootable_volume_glance_meta,
which combines enable_bootable_flag and glance metadata
handling. There are three kinds of volume creation tasks that may need
to copy/fetch the glance metadata. In the previous code each kind of
task had its own handling code; this patch consolidates them together
to share some common code.
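A simplified sketch of the consolidated helper (helper names are
approximations, not the exact manager code):

    def _handle_bootable_volume_glance_meta(self, context, volume_id,
                                            **kwargs):
        # One place to do what each create path used to do on its own:
        # 1) mark the volume bootable, and
        # 2) copy/fetch the glance metadata from the image, snapshot or
        #    source volume it was created from.
        self._enable_bootable_flag(context, volume_id)
        self._copy_or_fetch_glance_metadata(context, volume_id, **kwargs)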