Introduces a volume driver that uses SMB shares for hosting volumes
as disk images, with a workflow similar to that of the other NFS-like
drivers.
This driver is based on the SMB volume driver proposed for Linux,
overriding the platform-specific methods.
It includes all the features required by the Juno release.
Various commands were failing when running multiple copies of
cinder-volume. This is caused by keeping an in-memory copy of the
LUNs, which may be updated in the backend by another cinder-volume
instance. Fix it by refreshing the list of LUNs when one is not found
during a request.
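A minimal sketch of the refresh-on-miss pattern (the attribute and
helper names here are illustrative, not the driver's actual ones):

    def _get_lun(self, name):
        # Check the cached, in-memory copy first.
        lun = self._luns.get(name)
        if lun is None:
            # Another cinder-volume may have changed the backend;
            # refresh the cache once and retry before failing.
            self._luns = self._fetch_luns_from_backend()
            lun = self._luns.get(name)
        return lun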
Introduces a volume driver that uses SMB shares for hosting volumes
as disk images, with a workflow similar to that of the other NFS-like
drivers.
The driver uses Samba to access the SMB shares. It reads SMB share
information (share path, credentials and other mount flags) from a
shares configuration file whose location can be set in the Cinder
config file.
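As an illustration (the option name follows the NFS-like driver
convention; the share entry is an example):

    # cinder.conf
    smbfs_shares_config = /etc/cinder/smbfs_shares

    # /etc/cinder/smbfs_shares: one share per line with mount flags
    //192.168.1.10/volumes -o username=cinder,password=secret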
It includes all the features required by the Juno release.
Given the non-blocking nature of RPC cast, the work done by the
volume manager and the volume driver can happen in parallel with the
DB update. If the volume driver relies on the original DB record,
then the record shouldn't be updated until the driver finishes its
job.
So this patch changes the volume RPC API accept_transfer() from a
cast to a call to make sure the workflow is serialized. It also
elevates the context when the volume manager updates the DB record
after the driver has completed accept_transfer().
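A minimal sketch of the client-side change (the argument list mirrors
the description and may differ from the actual signature):

    def accept_transfer(self, ctxt, volume, new_user, new_project):
        cctxt = self.client.prepare(server=volume['host'])
        # Was a fire-and-forget cast; a blocking call keeps the DB
        # update from racing with the driver's accept_transfer().
        return cctxt.call(ctxt, 'accept_transfer',
                          volume_id=volume['id'],
                          new_user=new_user,
                          new_project=new_project)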
This patch changes the glance client version unit tests
to mock the Client object completely. Previously the
tests were ensuring the right version of the client was
returned, but that required too much knowledge of glance's
implementation and ended up breaking the tests when glance
changed the implementation details of the Client class.
The new code tests if cinder is calling the integration
point correctly for the version, rather than if glance
is correctly returning the right client; that should be
a glance test, not a cinder test.
Zhi Kun Liu [Fri, 5 Sep 2014 08:51:30 +0000 (16:51 +0800)]
remove object in wsgi LOG.info
The __init__ method of the Server class logs the wsgi server name,
host and port using its __dict__, which includes a socket object. The
i18n message deep-copies each value in __dict__. On python 2.6,
deep-copying the socket object raises a RuntimeError that cannot be
caught, which sends cinder-api into a hang loop. This patch uses the
related properties instead of the __dict__ object to fix the problem.
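A minimal sketch of the fix (attribute names are illustrative):

    # Before: self.__dict__ reaches the i18n deep copy and contains a
    # socket object, which python 2.6 cannot deep-copy.
    # LOG.info(_("Started %(name)s on %(_host)s:%(_port)s") % self.__dict__)

    # After: pass only the plain values the message needs.
    LOG.info(_("Started %(name)s on %(host)s:%(port)s"),
             {'name': self.name, 'host': self._host, 'port': self._port})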
The configuration option gpfs_storage_pool has a default value of
None, so it is initialized with None instead of 'system', whereas the
intention is for the default to be 'system'.
This change sets the default value of gpfs_storage_pool to 'system'
instead of None.
DocImpact
Needs an update to the gpfs driver documentation, as this patch
changes the default value of the configuration option
(gpfs_storage_pool)
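A sketch of the corrected option definition with oslo.config (help
text abbreviated):

    from oslo.config import cfg

    gpfs_opts = [
        cfg.StrOpt('gpfs_storage_pool',
                   default='system',  # previously: default=None
                   help='Storage pool to which volumes are assigned.'),
    ]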
The copy_volume_to_image method in windows.py specifies the wrong
volume format, namely 'vpc'. In this case, the upload_volume method
in image_utils attempts to convert the volume to vhd, which results
in an error from qemu as it does not recognize that format.
This patch fixes the issue by using 'vhd' as the format when
uploading volumes.
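A sketch of the corrected call (the surrounding arguments follow
image_utils.upload_volume's usual shape and may differ in detail):

    # 'vhd' is the format the Windows backend actually stores, so
    # image_utils no longer attempts the conversion qemu rejected.
    image_utils.upload_volume(context, image_service, image_meta,
                              volume_path, 'vhd')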
Kurt Martin [Tue, 2 Sep 2014 23:56:53 +0000 (16:56 -0700)]
Update the HP 3PAR default persona
The HP 3PAR drivers were setting the default persona to Generic.
The default should actually be Generic-ALUA for Linux-based hosts.
DocImpact
This will require an update in the OpenStack Configuration Reference
Guide under the supported operations page for the HP 3PAR drivers.
The new default for the hp3par:persona should be '2 - Generic-ALUA'.
ling-yun [Sat, 2 Aug 2014 06:43:35 +0000 (14:43 +0800)]
Support Volume Backup Quota
Since quotas take volumes, snapshots and gigabytes into account, they
also need to take backups into account.
1. The backup create API is not an admin interface; project users can
create any number of backups.
2. A malicious user could create enough large backups to exhaust the
free space of the backup storage backend, leaving cinder-backup
unable to serve requests.
Based on these two points, I think it is necessary to support volume
backup quota.
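A sketch of how such a quota could surface as oslo.config options
(names and defaults are illustrative; see the patch for the
authoritative ones):

    from oslo.config import cfg

    quota_opts = [
        cfg.IntOpt('quota_backups',
                   default=10,
                   help='Number of volume backups allowed per project'),
        cfg.IntOpt('quota_backup_gigabytes',
                   default=1000,
                   help='Total backup capacity allowed per project, '
                        'in gigabytes'),
    ]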
Jun Ishizaki [Tue, 5 Aug 2014 01:21:45 +0000 (10:21 +0900)]
Add Fujitsu ETERNUS DX support
This driver adds support for the FUJITSU ETERNUS DX.
It supports the ETERNUS DX S2 and S3 series.
The driver runs volume operations by communicating with the backend
FUJITSU storage. It uses a CIM client in Python called PyWBEM to perform
CIM operations over HTTP.
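A minimal sketch of a CIM call with PyWBEM (URL and credentials are
placeholders):

    import pywbem

    conn = pywbem.WBEMConnection('http://eternus.example.com:5988',
                                 ('admin', 'password'))
    # Enumerate the backend's volumes via CIM-XML over HTTP.
    names = conn.EnumerateInstanceNames('CIM_StorageVolume')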
Support protocols:
- Fibre Channel
- iSCSI
Support functions:
- create/delete volume
- create/delete snapshot
- attach/detach volume
- list snapshots
- create volume from snapshot
- create volume from image
- create volume from volume
- create image from volume
- extend volume
- volume migration (host assisted)
driver-cert result
fc : https://bugs.launchpad.net/cinder/+bug/1359467
iscsi : https://bugs.launchpad.net/cinder/+bug/1359569
Zhiteng Huang [Sat, 23 Aug 2014 18:32:57 +0000 (18:32 +0000)]
Pool-aware Scheduler Support
This change introduces a pool-aware scheduler to address the need to
support multiple pools from one storage controller.
Terminology
-----------
Pool - A logical concept describing a set of storage resources that
can be used to serve core Cinder requests, e.g. volumes/snapshots.
This notion is almost identical to Cinder Volume Backend, for it
has similar attributes (capacity, capability). The main difference
is that a Pool can't exist on its own; it must reside in a Volume
Backend. One Volume Backend can have multiple Pools, but Pools
don't have sub-Pools (even if they do, sub-Pools are not exposed
to Cinder, yet). A Pool has a unique name in its backend namespace,
which means a Volume Backend can't have two pools with the same
name.
Legacy Volume - Volumes created before pools were introduced.
There are several corner cases where legacy volumes can cause
issues, especially for drivers that already managed pools internally
(e.g. 3Par, NetApp). Please refer to 'Limitation/Known Issues' for
details.
Design
------
The workflow in this change is simple:
1) Volume Backends report to the scheduler how many pools they have,
what those pools look like, and what they are capable of;
2) When a request comes in, the scheduler picks the pool that best
fits the need and passes the request to the backend where the
target pool resides;
3) The volume driver gets the message and lets the target pool serve
the request as the scheduler instructed.
To support placing resources (volume/snapshot) onto a pool, the
following pieces are currently missing in Cinder:
1. Volume Backends reporting capacity/capabilities at pool level;
2. Scheduler filtering/weighing based on pool capacity/capability
and placing volumes/snapshots onto a pool of a certain backend;
3. Recording which pool of a backend a resource is located on, and
passing that information between the scheduler and the volume
backend.
The missing piece 1 is solved by a) updating the format of the
periodic volume stats message to carry pool stats; b) altering the
manager/driver to collect and report pool stats. Below is an example
of the updated report message containing 2 pools:
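(An illustrative reconstruction; the key names follow the standard
capability report and the values are examples.)

    {
        'volume_backend_name': 'backend_1',
        'vendor_name': 'OpenStack',
        'driver_version': '1.0',
        'storage_protocol': 'iSCSI',
        'pools': [
            {'pool_name': 'pool_1',
             'total_capacity_gb': 500,
             'free_capacity_gb': 230,
             'reserved_percentage': 0,
             'QoS_support': False},
            {'pool_name': 'pool_2',
             'total_capacity_gb': 1024,
             'free_capacity_gb': 1024,
             'reserved_percentage': 0,
             'QoS_support': False},
        ],
    }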
Notice that there are now two levels of mandatory/fixed stats &
capabilities that every volume driver should report.
The scheduler change is mostly done in scheduler/host_manager.py:
* HostState adds a list element to hold PoolState(s) (a HostState
subclass). Each PoolState can be treated like a HostState, since it
carries as much information as a HostState and shares the same
structure. HostState.update_capabilities()/update_from_volume_capability()
are changed to handle both legacy and new report messages.
* HostManager.get_all_host_states() now returns a PoolState iterator
that includes all pools the scheduler tracks. To filters and weighers,
PoolState() and HostState() are identical, so there is no need to
change them: they deal with the same kind of information and the
exact same data structure as before. What filters and weighers deal
with looks like this:
# Before this change
HostState() for Backend 1
...
HostState() for Backend N
# After this change
PoolState() for Backend 1 Pool 1
...
PoolState() for Backend 1 Pool N
...
PoolState() for Backend N Pool 1
...
PoolState() for Backend N Pool N
With this change, filter scheduler will pick a pool@host instead of a
host.
Now that we are able to report and decide at pool level, the 3rd
missing piece is easy to fix. Just like how multi-backend
volume service is supported, we encode the pool name into the 'host'
field of the Volume table. The 'host' field is now 'host@backend#pool'. Notice
that this change doesn't mean that cinder-volume service will have to
subscribe to multiple RPC channels. There is no need to mess with
message queue subscription at all because we did a little trick when
determining RPC target in VolumeRPCAPI: correct host info like
'host@backend' is extracted from 'host@backend#pool' before sending
RPC messages. Therefore, once scheduler decides which pool on a
backend shall serve a request, it updates 'host' field of the volume
record in DB to be like 'host@backend#pool', but it still sends RPC
message to 'host@backend' which cinder-volume is listening to.
A similar action is taken when creating backups for volumes:
host@backend is extracted from volume['host'] so that the correct
backup service can be picked.
Other changes are done in this patch:
* Change get_volume_stats() in ISCSIVolumeDriver/ISERDriver to include
pool stats, and change default total/free_capacity_gb from 'infinite'
to 0.
* Add logic in volume manager init_host() to detect legacy volumes
and try to update their host info if the driver is able to provide
pool info.
* Add a get_pool() driver API to return the pool name of a given
volume; this helps the volume manager handle legacy volumes,
especially for backends that already support pooling internally
(e.g. 3Par, NetApp).
* Implement get_pool() for the LVM driver to return the volume
backend name as the pool name.
* Add an extract_host() helper function in cinder/volume/utils.py to
help handle cases where 'host', 'backend' or 'pool' information needs
to be extracted from volume['host'].
* Add an append_host() helper function in cinder/volume/utils.py to
help concatenate host and pool strings into one for the
volume['host'] field.
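A sketch of the two helpers in use, assuming the behavior described
above:

    from cinder.volume import utils as volume_utils

    host = 'host1@lvmdriver#pool_a'
    volume_utils.extract_host(host)                  # 'host1@lvmdriver'
    volume_utils.extract_host(host, level='host')    # 'host1'
    volume_utils.extract_host(host, level='pool')    # 'pool_a'
    volume_utils.append_host('host1@lvmdriver', 'pool_a')
                                                     # 'host1@lvmdriver#pool_a'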
Limitation/Known Issues
-----------------------
* The term 'host' in Cinder used to refer to 'backend', and it was
consistent from the view of end users/admins down to Cinder
internals. Now that pools are exposed to the Cinder scheduler, the
scheduler starts treating different pools on the same backend as
different hosts. Therefore, we have to expose pools to admins at
least, because migrating a volume now has to include the pool in the
'host' parameter in order to work. For end users, the idea that a
volume's 'host' equals its storage backend still works well, so they
can decide the migration policy when retyping volumes, or choose to
create new volumes on the same or a different host as existing
volumes. But it is now *not* easy to hide pools from end users and
make retype or the affinity filter work like before. This change has
a special code path for legacy volumes, to allow (potential)
migration between pools even when migration_policy is set to 'never'.
But not every driver has the magic to move volumes from one pool to
another at minimum cost, and the resulting inconsistency between
drivers (the same command may take a totally different time to
finish) could be very confusing.
* Drivers that want to support pools need updating, but drivers that
don't should work just like they used to, without any change, except:
- creating volumes using same/different host hints against legacy
volumes may NOT work as expected, because 'hostA' is considered
different from 'hostA#pool0' and 'hostA#pool1'. But a legacy volume
on 'hostA' might actually reside in pool0; only the driver has this
knowledge.
- retyping legacy volumes has the issue mentioned above.
The ultimate solution for all these corner cases is to update the
Cinder DB to add 'pool' info for legacy volumes. The problem is that
only the driver knows this info, which is why we add the new driver
API get_pool(), so that the volume manager is able to learn pool info
from the driver and update the host field of legacy volumes in the
DB.
User-Visible Change
-------------------
DocImpact
For managing and migrating volumes, the user now needs to provide
pool information as part of the host string. For example:
cinder manage --source-name X --name newX host@backend#POOL
cinder migrate UUID host@backend#POOL
Vincent Hou [Wed, 23 Jul 2014 04:06:38 +0000 (00:06 -0400)]
Add QoS support to IBM Storwize driver
QoS support can be enabled for the IBM Storwize driver by setting an
I/O throttling rate parameter, which caps the amount of I/O.
This patch adds the QoS configuration to the create_volume,
create_volume_from_snapshot, create_cloned_volume and retype methods.
QoS for IBM Storwize storage can be configured in 3 ways:
* Add the key "qos:IOThrottling" to a QoS spec and associate that QoS
spec with a volume type.
* Add the key "qos:IOThrottling" to the extra specs of a volume type.
* Add metadata with the key "qos:IOThrottling".
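As an illustration of the first option (the spec name, value and IDs
are examples):

    cinder qos-create storwize-qos qos:IOThrottling=200
    cinder qos-associate <qos_spec_id> <volume_type_id>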
Ed Balduf [Wed, 7 May 2014 19:04:14 +0000 (13:04 -0600)]
Driver for Fusion-io ioControl Hybrid array
Standard Cinder driver for iSCSI target array.
Uses REST API of the ioControl array.
Implements Quality of Service through the 5 policies available on the
array.
Add Apache License block
Test results: https://bugs.launchpad.net/cinder/+bug/1317248
Jay S. Bryant [Thu, 14 Aug 2014 20:03:10 +0000 (15:03 -0500)]
Make ssh-host-key-policy configurable
This patch adds configuration options for ssh_hosts_key_file and
strict_ssh_host_key_policy. You can set strict_ssh_host_key_policy
to 'True' or 'False'. If set to 'False', the first connection to a
host causes its key to be added to the known_hosts file. Subsequent
connections are verified against the existing key. Changes in the key
are assumed to be a man-in-the-middle attack, so the connection is
rejected.
If strict_ssh_host_key_policy is 'True', the key for the host being
connected to must already be in the hosts_key_file; no
first-connection assumptions are made.
strict_ssh_host_key_policy defaults to 'False' to keep behavior
similar to the existing functionality.
With this patch, a default of $state_path/ssh_known_hosts is used for
the known_hosts file. Unlike the previous approach, this requires the
user to have a writable known_hosts file somewhere. The option is
configurable if they don't want to use $state_path/ssh_known_hosts.
DocImpact: Need to document the new strict_ssh_host_key_policy as well
as the ssh_hosts_key_file. A note should be made for drivers that may
pass a hosts_key_file via kwargs when creating an ssh pool: their file
will be loaded along with the file configured via /etc/cinder.conf.
Also worth noting: for development environments, an ssh_hosts_key_file
of /dev/null and a strict_ssh_host_key_policy setting of 'False' may
be used. Using those settings effectively bypasses these checks.
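An illustrative cinder.conf fragment for the strict mode described
above (option names come from this patch):

    [DEFAULT]
    strict_ssh_host_key_policy = True
    ssh_hosts_key_file = /etc/cinder/ssh_known_hosts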
John McDonough [Wed, 2 Apr 2014 22:47:06 +0000 (18:47 -0400)]
Add Cisco FC Zoning plugin to the FC ZoneManager
The Cisco FC Zoning plugin allows for the automated creation,
deletion and modification of zones in zonesets. The Cisco
FC zoning plugin supports both basic and enhanced zoning, and
supports VSANs.
2b966f9 Fix deletion of cached file for policy enforcer
8202a96 Merge "Make policy debug logging less verbose"
238e601 Make policy debug logging less verbose
9c88dc3 file_open: fixed docstring to refer to open() instead of file()
6c7407b fileutils: port to Python 3
fe3389e Improve help strings
33a2cee save_and_reraise_exception: make logging respect the reraise parameter
15722f1 Adds a flag to determine whether to reload the rules in policy
dacc065 Merge "Update oslo log messages with translation domains"
5d1f15a Documenting policy.json syntax
fcf517d Update oslo log messages with translation domains
b59cfd9 Merge "Allow policy.json resource vs constant check"
e038d89 Fix policy tests for parallel testing
0da5de6 Allow policy.json resource vs constant check
e4b2334 Replaces use of urlutils with six in policy module
e71cd1a Merge "Trivial: Make vertical white space after license header consistent"
8b2b0b7 Use hacking import_exceptions for gettextutils._
6d0a6c3 Correct invalid docstrings
6fa29ae Trivial: Make vertical white space after license header consistent
0d8f18b Use urlutils functions instead of urllib/urllib2
12bcdb7 Remove vim header
9ef9fec Use six.string_type instead of basestring
4bfb7a2 Apply six for metaclass
1538c80 ConfigFileNotFoundError with proper argument
477bf7a Add utils for creating tempfile
33533b0 Keystone user can't perform revoke_token
d602070 Merge "excutils: replace unicode by six.u"
2ad95e4 parameterize fileutils removal functions
d3b6e97 excutils: replace unicode by six.u
e35e166 excutils: use six.reraise to re-raise
14ba138 Merge "Fix wrong argument in openstack common policy"
64bb5e2 Fix wrong argument in openstack common policy
b7edc99 Fix missing argument bug in oslo common policy
96d1f88 Merge "BaseException.message is deprecated since Python 2.6"
f58c936 Merge "Fix policy default_rule issue"
3626b6d Fix policy default_rule issue
df3f2ba BaseException.message is deprecated since Python 2.6
7bf8ee9 Allow use of hacking 0.6.0 and enable new checks
d74ac1d Merge "Fix missing argument bug in oslo common policy"
e4ac367 Fix missing argument bug in oslo common policy
1a2df89 Enable H302 hacking check
323e465 Add conditional exception reraise
22ec8ff Make AMQP based RPC consumer threads more robust
7119e29 Enable hacking H404 test.
4246ce0 Added common code into fileutils and strutils.
21ee25f Add common code for fileutils.
6d27681 Enable H306 hacking check.
1091b4f Reduce duplicated code related to policies
a514693 Removes len() on empty sequence evaluation
fde1e15 Convert unicode for python3 portability
e700d92 Replaces standard logging with common logging
65e3d8c update OpenStack, LLC to OpenStack Foundation
547ab34 Fix Copyright Headers - Rename LLC to Foundation
9e5912f Fix pep8 E125 errors.
6d102bc Provide i18n to those messages without _()
9a8c1d7 Move nova's util.synchronized decorator to openstack common.
f182936 Merge "Revert "Add support for finer-grained policy decisions""
76751a6 Revert "Add support for finer-grained policy decisions"
8b585cb Remove an unneeded 'global'
3fc4689 Add support for finer-grained policy decisions
21b69d8 Add a 'not' operator to the policy langage
fa7dc58 Add a new policy language
8c6e7a7 Remove deprecated policy engine APIs
Change-Id: Iddca4243d312c9cd768588753af49dde068d5e4b
Co-Authored-By: Jordan Pittier <jordan.pittier@cloudwatt.com>
The XtremIO direct driver has been contributed to the Juno release.
It supports all the minimum features required by Juno for both iSCSI
and Fibre Channel.
Certificate Test Results
https://bugs.launchpad.net/cinder/+bug/1336844
Xing Yang [Wed, 27 Aug 2014 03:53:34 +0000 (23:53 -0400)]
Consistency Groups
This patch enables Consistency Groups support in Cinder.
Phase 1 implements snapshots for CGs.
Design
------------------------------------------------
The workflow is as follows:
1) Create a CG, specifying all volume types that can be supported by this
CG. The scheduler chooses a backend that supports all specified volume types.
The CG will be empty when it is first created. Backend needs to report
consistencygroup_support = True. Volume type can have the following in
extra specs: {'capabilities:consistencygroup_support': '<is> True'}.
If consistencygroup_support is not in volume type extra specs, it will be
added to filter_properties by the scheduler to make sure that the scheduler
will select the backend which reports consistency group support capability.
Creating a cgsnapshot of the CG adds a cgsnapshot entry to the new
cgsnapshots table, creates a snapshot for each volume in the CG, and
adds a cgsnapshot_id foreign key to each newly created snapshot entry
in the db.
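Illustrative CLI usage for this workflow (arguments are examples; see
the API documentation for the authoritative form):

    cinder consisgroup-create --volume-type type1 --name my_cg
    cinder create --consisgroup-id <cg_id> --volume-type type1 10
    cinder cgsnapshot-create --name my_cgsnap <cg_id>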
Reverts the changes to cinder/volume/manager.py added in
commit b868ae707f9ecbe254101e21d9d7ffa0b05b17d1 as calling
remove_export in terminate_connection causes Nova live
migration to fail when volumes are attached.
Anthony Lee [Thu, 21 Aug 2014 22:57:06 +0000 (15:57 -0700)]
Fixing 3PAR excessive FC port usage
Updating the 3PAR FC driver so that it can detect if there is
only a single FC path available. When a single FC path is
detected only a single VLUN will be created instead of one for
every available NSP on the host. This will prevent a host
from using extra FC ports that are not needed. If multiple
FC paths are available all the ports will still be used.
Jay S. Bryant [Tue, 26 Aug 2014 16:24:54 +0000 (11:24 -0500)]
Sync latest strutils from oslo-incubator
We have had a request from Tristan Cacqueray to pull in the latest
strutils. The request is driven by a need to get improvements in
place that make strutils.mask_password more secure.
Jay S. Bryant [Mon, 25 Aug 2014 18:09:14 +0000 (13:09 -0500)]
Mock processutils.execute properly in test_ibmnas
test_delete_snapfiles and test_delete_snapfiles_nas_gpfs were not
properly mocking out cinder.openstack.common.processutils.execute.
This was causing the unit tests to prompt for a sudo password on some
systems.
This change mocks out processutils.execute to return what is expected
for this test case.
The default cache implementation in suds stores pickled objects at a
predictable path in /tmp, which could lead to attacks. This patch
turns off suds caching to address this security issue.
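The change amounts to something like the following (illustrative):

    from suds.client import Client

    # cache=None disables suds' default ObjectCache, which pickles
    # objects under a predictable path in /tmp.
    client = Client(wsdl_url, cache=None)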
Flavio Percoco [Fri, 22 Aug 2014 16:33:07 +0000 (18:33 +0200)]
Pass an empty context to the notifier
The notifier API expects a context as a first parameter. During the port
to oslo.messaging, a bug was introduced here and the first argument in
the notifier call was removed. This patch puts it back.
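A minimal sketch of the fix (event type and payload are placeholders):

    # Before (broken during the oslo.messaging port):
    #     notifier.info('volume.usage', payload)
    # After: the notifier API takes a context as its first argument.
    notifier.info({}, 'volume.usage', payload)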
Juan Zuluaga [Thu, 10 Jul 2014 17:15:27 +0000 (13:15 -0400)]
Add Oracle ZFS Storage Appliance ISCSI Driver
ZFSSA ISCSI Driver is designed for ZFS Storage Appliance product
line (ZS3-2, ZS3-4, ZS3-ES, 7420 and 7320).
It uses a REST API to communicate out of band with the storage controller
to perform the following:
* Create/Delete Volume
* Extend Volume
* Create/Delete Snapshot
* Create Volume from Snapshot
* Delete Volume Snapshot
* Attach/Detach Volume
* Get Volume Stats
* Clone Volume
Update cinder.conf.sample to include ZFS Storage Appliance
properties.
Certification test results:
https://bugs.launchpad.net/cinder/+bug/1356075
Ronen Kat [Sat, 26 Jul 2014 14:06:52 +0000 (17:06 +0300)]
Add support in Cinder for volume replication - driver approach
This is take #2 for managing replication in Cinder.
This patch provides the foundation in Cinder to make volume
replication available to the cloud admin. It makes Cinder aware
of volume replicas, and allows the cloud admin to define storage
policies (volume types) that will enable replication.
In this version, Cinder delegates most of the replication work
to the driver itself.
This includes:
1. The driver exposes replication capabilities via a volume type
convention.
2. The volume table is extended with columns to support replication.
3. Replicas are created in the driver, making them transparent to
Cinder.
4. Volume manager code handles the API; create_volume is updated to
support creating test replicas.
5. Driver methods expose per-replication functions.
Cinder-specs available at https://review.openstack.org/#/c/98308/
Volume replication use-case: Simplified disaster recovery
The OpenStack cloud is deployed across two metro-distance data
centers. Storage backends are available in both data centers. The
backends are managed by either one Cinder host or two, depending on
the storage backend requirements.
Storage admin configures the Cinder volume driver to support
replication.
Cloud admin creates a volume type "replicated" with extra-specs:
capabilities:replication="<is> True"
Every volume created in type "replicated" has a copy on both
backends.
If the first data center fails, the cloud admin promotes the
replica and redeploys the VMs; they will then run on a host in the
secondary data center, using the storage in the secondary data
center.
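The "replicated" type in this use-case could be defined along these
lines (illustrative CLI):

    cinder type-create replicated
    cinder type-key replicated set capabilities:replication="<is> True"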
Xing Yang [Thu, 3 Jul 2014 21:50:45 +0000 (17:50 -0400)]
EMC VMAX Driver Juno Update
This driver is an enhancement of the EMC SMI-S driver.
In Juno, VNX support will be removed from this driver.
Moving forward, this driver will support VMAX only.
The following features are added for VMAX:
* Extend volume
* Create volume from snapshot
* Dynamically creating masking views, storage groups,
and initiator groups
* Striped volumes
* FAST policies
Tempest test results from CI system:
https://bugs.launchpad.net/cinder/+bug/1337840
Tomoki Sekiyama [Thu, 21 Aug 2014 15:13:54 +0000 (11:13 -0400)]
Revert test_rootwrap_filter to avoid python2.6 test failure
oslo.rootwrap 1.3.0.0a1 has a bug (#1340792) in which some filter
rules depend on their evaluation order. Because that order changes
randomly, test_rootwrap_filter randomly fails under python 2.6 for
patches that add rules to etc/cinder/rootwrap.d/volume.filters.
The bug is fixed in the oslo.rootwrap master branch, but a fixed
release is not yet available in CI. Until a new oslo.rootwrap is
released, this patch reverts the unit test to avoid blocking reviews.
Boris Pavlovic [Sun, 29 Jun 2014 16:03:28 +0000 (20:03 +0400)]
Integrate OSprofiler and Cinder
*) Add the osprofiler wsgi middleware.
This middleware is used for 2 things:
1) It checks that the person who wants to trace is trusted and knows
the secret HMAC key.
2) It starts tracing when proper trace headers are present, and adds
the first wsgi trace point with info about the HTTP request.
*) Add initialization of osprofiler at service start.
Sets the oslo.messaging notifier instance (to send notifications to
Ceilometer).
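A rough sketch of the wsgi wiring (osprofiler's API moved around
between early versions, so treat the names below as illustrative):

    from osprofiler import web

    # Requests are traced only when they carry trace headers signed
    # with the shared secret HMAC key.
    application = web.WsgiMiddleware(application,
                                     hmac_keys='SECRET_KEY',
                                     enabled=True)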
Zhiteng Huang [Tue, 19 Aug 2014 14:27:26 +0000 (22:27 +0800)]
Honor volume:get policy
The fix for bug 1356368 hard-coded a policy check (same as
rule:admin_or_owner) for volume:get. While in most cases this is
what people want, it is better to honor the policy setting.
Note that before commit 0505bb268942534ad5d6ecd5e34a4d9b0e7f5c04,
DB query volume_get() actually acted as the policy checker for
volume:get, and it raised VolumeNotFound if context.project_id didn't
match volume['project_id']. The check_policy() in volume:get didn't
get a chance to raise PolicyNotAuthorized exception. So in this
change we keep the same behavior.