Nilesh Bhosale [Sun, 24 Aug 2014 07:36:29 +0000 (13:06 +0530)]
IBMNAS: Remove call to set r/w permissions to all
During a cinder volume create operation from a volume snapshot or
from an existing volume (volume clone), the ibmnas driver sets 'rw'
permissions for all users, which is unnecessary and also poses a
security concern.
This patch removes the calls that set 'rw' permissions for all users
during these operations and adds a call that sets 'rw' permissions
only for the owner. That way, even if a umask set at the filesystem
level would deny the owner 'rw' access, we explicitly set the required
permissions on the volume file.
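For illustration, a minimal sketch of the owner-only permission change
(the path handling and helper are assumptions, not the driver's code):

    import os
    import stat

    def set_owner_rw(volume_path):
        # Grant read/write to the owner only, instead of 'rw' for everyone.
        # Setting the mode explicitly also covers the case where a restrictive
        # umask at the filesystem level would otherwise deny the owner access.
        os.chmod(volume_path, stat.S_IRUSR | stat.S_IWUSR)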
Xing Yang [Sun, 14 Sep 2014 23:50:01 +0000 (19:50 -0400)]
Getting iscsi_ip_address from cinder.conf
In the current version (2.0) of the VMAX driver, we retrieve iSCSI IP
addresses dynamically from SMI-S. However, we ran into situations where
we can't get them reliably during testing. The fix is to get this
information from cinder.conf, just like in version 1.0.
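As a rough illustration, the driver-side lookup can be as simple as this
(a standalone sketch; the function name is hypothetical and the option is
read through Cinder's configuration wrapper):

    def get_iscsi_portal_ip(configuration):
        # Prefer the statically configured address from cinder.conf
        # (as in driver version 1.0) over dynamic discovery via SMI-S.
        return configuration.safe_get('iscsi_ip_address')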
Clinton Knight [Fri, 15 Aug 2014 15:12:14 +0000 (11:12 -0400)]
Netapp drivers support for pool-aware scheduling
Adds pool support for all NetApp drivers: eseries, 7mode (iSCSI and NFS), and
cmode (iSCSI and NFS). With the 7mode and cmode drivers, a pool maps one-to-one
to an ONTAP FlexVol. With eseries, a pool maps one-to-one to a dynamic disk pool.
ling-yun [Tue, 26 Aug 2014 10:39:30 +0000 (18:39 +0800)]
Add reset-state function for backups
Since volumes and snapshots already have reset-state functions, backups need
one as well.
When creating or restoring a backup, the backup may get stuck in the creating
or restoring status if, for example, the database or RabbitMQ goes down.
Currently we can only solve these problems by restarting the cinder-backup
service.
This BP provides another means for administrators to solve these problems by
calling the backup reset-state API instead of directly restarting the
cinder-backup service.
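As a hedged illustration, an administrator could reset a stuck backup through
the backups action endpoint (the os-reset_status action name is assumed to
follow the existing volume/snapshot convention; endpoint and token are
placeholders):

    import json
    import requests

    def reset_backup_status(endpoint, token, backup_id, status='error'):
        # POST /v2/{tenant_id}/backups/{backup_id}/action
        url = '%s/backups/%s/action' % (endpoint, backup_id)
        body = {'os-reset_status': {'status': status}}
        resp = requests.post(url, data=json.dumps(body),
                             headers={'X-Auth-Token': token,
                                      'Content-Type': 'application/json'})
        resp.raise_for_status()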
Mike Perez [Thu, 4 Sep 2014 04:05:53 +0000 (21:05 -0700)]
Destroy Datera export target after detach
Since there is only one lun associated with each target export object in
the Datera backend, we will destroy the target export object after
detaching the volume so they don't build up. This patch also fixes some
missing values for formatted strings.
The copy_volume_to_image requires in some cases an intermediary
conversion, which takes place in the configured image conversion
dir. If this folder is not created, the operation will fail.
This patch ensures that the method creates the folder if it does not exist.
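A minimal sketch of the guard this describes (names are illustrative; Cinder's
oslo-incubator fileutils.ensure_tree could serve the same purpose):

    import os

    def ensure_conversion_dir(conversion_dir):
        # Create the configured image conversion directory if it is missing,
        # so the intermediary conversion in copy_volume_to_image cannot fail
        # on a non-existent path.
        if not os.path.isdir(conversion_dir):
            os.makedirs(conversion_dir)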
Jay S. Bryant [Wed, 10 Sep 2014 15:18:22 +0000 (10:18 -0500)]
Sync service.py and its dependencies to Cinder
Bug 1362455 noted that our service.py was out of sync with what is currently
in oslo-incubator. We have been syncing things piecemeal, and it would be good
to get this code synced up.
This commit pulls in the following changes listed by file,
ordered from newest to oldest:
cinder/openstack/common/eventlet_backdoor.py
------------------------------------------------
90ae24bf Remove redundant default=None for config options
fcf517d7 Update oslo log messages with translation domains

cinder/openstack/common/jsonutils.py
------------------------------------------------
d9060f69 Check for namedtuple_as_object support before using it
ad248f66 Specify namedtuple_as_object=False when using simplejson

cinder/openstack/common/log.py
------------------------------------------------
037dee00 Set stevedore log level to WARN by default

cinder/openstack/common/loopingcall.py
------------------------------------------------
e3773930 Changes calcuation of variable delay
ab5d5f1c Use timestamp in loopingcall
bc48099a Log the function name of looping call
fb4e863c Remove deprecated LoopingCall
fcf517d7 Update oslo log messages with translation domains

cinder/openstack/common/service.py
------------------------------------------------
5f12bba4 Implement stop method in ProcessLauncher
8c00b361 Merge "Remove duplicated "caught" message"
f718e404 Remove duplicated "caught" message
356ecd4d Move notification point to a better place
e2634a75 Add missing _LI for LOG.info in service module
0150ad75 Merge "Reap child processes gracefully if greenlet thread gets killed"
53e1214c notify calling process we are ready to serve
a6fec438 Reap child processes gracefully if greenlet thread gets killed
fcf517d7 Update oslo log messages with translation domains
1e700782 Revert "service: replace eventlet event by threading"
0644073e Simplify launch method
6b4d255e Merge "service: replace eventlet event by threading"
a4f145e4 Merge "Allow configurable ProcessLauncher liveness check"

cinder/openstack/common/threadgroup.py
------------------------------------------------
5a1a0166 Make stop_timers() method public
fdc88831 Add graceful stop function to ThreadGroup.stop
5f8ace05 Merge "threadgroup: use threading rather than greenthread"
2d06d6ca Simple typo correction
4d18b57a threadgroup: use threading rather than greenthread

cinder/openstack/common/systemd.py
------------------------------------------------
17c4e21e Fix docstring indentation in systemd
667d1bac Fixed spelling error - occured to occurred
53e1214c notify calling process we are ready to serve
Mark Sturdevant [Tue, 2 Sep 2014 23:25:47 +0000 (16:25 -0700)]
HP 3PAR configurable ssh-host-key-policy
Cinder added configurable SSH strict key policy (RejectPolicy)
and configurable known_hosts_file. This patch configures the
hp3parclient (used by the 3PAR drivers) using those settings.
Xing Yang [Fri, 5 Sep 2014 21:48:00 +0000 (17:48 -0400)]
Delete consistency group failed
Delete CG failed because host@backend was compared with host@backend#pool
(in delete_consistencygroup in volume/manager.py). The fix is to call
extract_host before doing the comparison.
Another issue is deleting a CG with no host: this throws an exception when
extract_host(group['host']) is called in delete_consistencygroup in
volume/rpcapi.py. The solution is to check the host field in
consistencygroup/api.py and delete the group from the DB there.
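A sketch of the backend-level comparison described above, using the
extract_host() helper from cinder/volume/utils.py (the wrapper function is
illustrative):

    from cinder.volume import utils as volume_utils

    def same_backend(volume_host, group_host):
        # 'host@backend#pool' and 'host@backend' should compare equal at the
        # backend level, so strip the pool part before comparing.
        return (volume_utils.extract_host(volume_host) ==
                volume_utils.extract_host(group_host))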
Introduces a volume driver which makes use of SMB shares for hosting
volumes as disk images, with a workflow similar to that of the other
NFS-like drivers.
This driver is based on the SMB volume driver proposed for Linux,
overriding the platform-specific methods.
It includes all the features required by the Juno release.
Various commands were failing when running multiple copies of
cinder-volume. This is due to storing an in-memory copy of the LUNs,
which may be updated in the backend by another cinder-volume. Fix
this by attempting to refresh the list of LUNs if one is not found
during a request.
Introduces a volume driver which makes use of SMB shares for
hosting volumes as disk images, with a workflow similar to that of
the other NFS-like drivers.
The driver uses Samba to access the SMB shares. It takes SMB share
information, such as the share path, credentials and other mount
flags, from a config file whose location can be set in the Cinder
config file.
It includes all the features required by the Juno release.
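For illustration only, a shares file entry and a tiny parser sketch (the
exact file format is an assumption, mirroring the NFS-style drivers: one
share per line, optionally followed by mount flags):

    # Example line in the shares config file (illustrative):
    #   //192.168.1.10/cinder_smb -o username=cinder,password=secret

    def parse_share_line(line):
        parts = line.strip().split(' ', 1)
        share = parts[0]
        mount_flags = parts[1] if len(parts) > 1 else ''
        return share, mount_flags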
Given the non-blocking nature of an RPC cast, work in the volume
manager and volume driver can happen in parallel with the DB update.
If the volume driver relies on the original DB record to do its work,
then the DB record shouldn't be updated until the volume driver
finishes its job. So this patch changes the volume RPC API
accept_transfer() from a cast to a call to make sure the workflow is
serialized. It also elevates the context when the volume manager
updates the DB record after the driver has completed accept_transfer().
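A rough sketch of the RPC-side change, following Cinder's rpcapi pattern
(argument names and the prepare() arguments are illustrative):

    # In VolumeAPI (cinder/volume/rpcapi.py), switch from a fire-and-forget
    # cast to a blocking call, so the DB update only happens after the driver
    # has finished accept_transfer().
    def accept_transfer(self, ctxt, volume, new_user, new_project):
        cctxt = self.client.prepare(server=volume['host'])
        return cctxt.call(ctxt, 'accept_transfer',
                          volume_id=volume['id'],
                          new_user=new_user,
                          new_project=new_project)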
This patch changes the glance client version unit tests
to mock the Client object completely. Previously the
tests were ensuring the right version of the client was
returned, but that required too much knowledge of glance's
implementation and ended up breaking the tests when glance
changed the implementation details of the Client class.
The new code tests if cinder is calling the integration
point correctly for the version, rather than if glance
is correctly returning the right client; that should be
a glance test, not a cinder test.
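A hedged sketch of the approach (module paths and arguments are illustrative,
not the actual test code):

    import mock
    import glanceclient

    def create_client(version, endpoint):
        # Stand-in for the Cinder integration point under test.
        return glanceclient.Client(version, endpoint)

    @mock.patch('glanceclient.Client')
    def test_client_created_with_requested_version(mock_client):
        create_client('2', 'http://glance.example.org:9292')
        mock_client.assert_called_once_with('2', 'http://glance.example.org:9292')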
Zhi Kun Liu [Fri, 5 Sep 2014 08:51:30 +0000 (16:51 +0800)]
remove object in wsgi LOG.info
The __init__ method of the Server class logs the WSGI server name, host
and port using its __dict__, which includes a socket object. The i18n
message deep-copies each item's value in __dict__. In Python 2.6,
deep-copying the socket object raises a RuntimeError that cannot be
caught, which makes cinder-api hang in a loop. This patch uses the
relevant properties instead of the __dict__ object to fix the problem.
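A simplified stand-in for the fix (not the actual wsgi.Server code; the point
is logging plain attribute values rather than self.__dict__):

    import logging

    LOG = logging.getLogger(__name__)

    class Server(object):
        def __init__(self, name, host, port):
            self.name, self.host, self.port = name, host, port
            # Log explicit properties instead of self.__dict__, so the i18n
            # deep copy never touches a wrapped socket object.
            LOG.info("%(name)s listening on %(host)s:%(port)s",
                     {'name': self.name, 'host': self.host, 'port': self.port})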
Andrew Kerr [Fri, 5 Sep 2014 15:28:06 +0000 (10:28 -0500)]
Don't clear _mounted_shares list in remoteFS while updating
This fix makes the updating of the _mounted_shares list in
remoteFS more of an atomic operation. Previously this list
would be cleared, then rebuilt. That allowed a race
condition where operations that took place during this
update would have a list of 0 shares to work with.
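A sketch of the build-then-swap approach (simplified; the real code also
verifies each share is usable before adding it):

    def refresh_mounted_shares(candidate_shares, ensure_mounted):
        # Build the refreshed list first, then let the caller swap it in with
        # a single assignment, so concurrent operations never see an empty
        # _mounted_shares list.
        mounted = []
        for share in candidate_shares:
            try:
                ensure_mounted(share)
                mounted.append(share)
            except Exception:
                continue
        return mounted

    # caller:
    #   self._mounted_shares = refresh_mounted_shares(shares,
    #                                                 self._ensure_share_mounted)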
liyingjun [Wed, 6 Aug 2014 21:58:17 +0000 (05:58 +0800)]
Add filter to volume transfer REST api
The transfer API supports filter params,
https://github.com/openstack/cinder/blob/master/cinder/transfer/api.py#L72
but the transfer REST API didn't pass the filter params when calling get_all.
The configuration option gpfs_storage_pool is assigned a default
value of None, so it gets initialized to None instead of 'system',
whereas the intention is for the default value to be 'system'.
This change sets the default value of gpfs_storage_pool to 'system'
instead of None.
DocImpact
Needs an update to the GPFS driver documentation, as this patch
changes the default value of the configuration option
(gpfs_storage_pool).
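For illustration, the new default could be expressed like this (a sketch
using oslo.config; the help text is assumed):

    from oslo.config import cfg  # 'oslo.config' namespace as used at the time

    gpfs_opts = [
        cfg.StrOpt('gpfs_storage_pool',
                   default='system',
                   help='Storage pool to which GPFS volumes are assigned.'),
    ]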
The method copy_volume_to_image from windows.py specifies the
wrong volume format, namely 'vpc'. In this case, the upload_volume
method from image_utils attempts to convert the volume to vhd,
which results in an error from qemu as it does not recognize the
format.
This patch fixes the issue by using 'vhd' as the format when
uploading volumes.
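A hedged sketch of the corrected call (the argument list is simplified; only
the volume_format value is the point here):

    from cinder.image import image_utils

    def copy_volume_to_image(context, image_service, image_meta, volume_path):
        # Pass 'vhd' so image_utils does not attempt a qemu-img conversion to
        # an output format qemu does not recognize.
        image_utils.upload_volume(context, image_service, image_meta,
                                  volume_path, volume_format='vhd')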
The extend virtual disk API is called during volume creation from an image
if the image size is less than the volume size. In the case of streamOptimized
and sparse vmdk images, the size of the virtual disk created from the
image can be greater than the image size and equal to the volume size. For
example, a streamOptimized image created from a new 1GB volume has a size of
69120 bytes, while the virtual disk created from this image is 1GB. Therefore,
relying on the image size to invoke the extend API might result in a VIM API
fault if the virtual disk size is the same as the target size (the volume
size). The fix is to read the current virtual disk size to decide whether
extend needs to be invoked or not.
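A sketch of the decision, assuming a helper that returns the created disk's
capacity in GB (all names here are hypothetical):

    def maybe_extend_disk(current_disk_size_gb, requested_volume_size_gb,
                          extend_fn):
        # Extend only when the virtual disk created from the image is smaller
        # than the requested volume size; calling extend with an equal target
        # size can trigger a VIM API fault.
        if current_disk_size_gb < requested_volume_size_gb:
            extend_fn(requested_volume_size_gb)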
Currently, backup-create and backup-restore operations are not
supported for volumes created by the VMDK driver. This change
implements the backup_volume and restore_backup APIs for the VMDK driver.
DocImpact
Added a new config option 'vmware_tmp_dir' which specifies the
directory on the Cinder host to use for storing temporary virtual
disk files during backup/restore.
Kurt Martin [Tue, 2 Sep 2014 23:56:53 +0000 (16:56 -0700)]
Update the HP 3PAR default persona
The HP 3PAR drivers were setting the default persona to Generic.
The default should actually be Generic-ALUA for Linux-based hosts.
DocImpact
This will require an update in the OpenStack Configuration Reference
Guide under the supported operations page for the HP 3PAR drivers.
The new default for the hp3par:persona should be '2 - Generic-ALUA'.
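As an illustration, the persona can still be set per volume type via an extra
spec (the value string comes from this message; the type name is a placeholder):
cinder type-key my_3par_type set hp3par:persona='2 - Generic-ALUA'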
ling-yun [Sat, 2 Aug 2014 06:43:35 +0000 (14:43 +0800)]
Support Volume Backup Quota
Since quotas take volumes, snapshots and gigabytes into account, they also
need to take backups into account.
1. The backup create API is not an admin-only interface; project users can
create any number of backups.
2. If malicious users create many large backups to exhaust the free space
of the backup storage backend, cinder-backup would end up rejecting
service.
Based on these two points, I think it is necessary to support volume
backup quotas.
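A hedged sketch of the reservation pattern the backup create API could follow
(the resource names 'backups' and 'backup_gigabytes' are assumptions; QUOTAS is
Cinder's existing quota engine):

    from cinder import quota

    QUOTAS = quota.QUOTAS

    def reserve_backup(context, size_gb):
        # Reserve quota before creating the backup; the caller commits the
        # reservations after the backup succeeds, or rolls them back on error.
        return QUOTAS.reserve(context, backups=1, backup_gigabytes=size_gb)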
Jun Ishizaki [Tue, 5 Aug 2014 01:21:45 +0000 (10:21 +0900)]
Add Fujitsu ETERNUS DX support
This driver adds support for FUJITSU ETERNUS DX storage.
It supports the ETERNUS DX S2 and S3 series.
The driver runs volume operations by communicating with the backend
FUJITSU storage. It uses a CIM client in Python called PyWBEM to perform
CIM operations over HTTP.
Supported protocols:
-Fibre Channel
-iSCSI
Supported functions:
-create/delete volume
-create/delete snapshot
-attach/detach volume
-list snapshots
-create volume from snapshot
-create volume from image
-create volume from volume
-create image from volume
-extend volume
-volume migration (host assisted)
driver-cert result
fc : https://bugs.launchpad.net/cinder/+bug/1359467
iscsi : https://bugs.launchpad.net/cinder/+bug/1359569
Zhiteng Huang [Sat, 23 Aug 2014 18:32:57 +0000 (18:32 +0000)]
Pool-aware Scheduler Support
This change introduces a pool-aware scheduler to address the need for
supporting multiple pools from one storage controller.
Terminology
-----------
Pool - A logical concept describing a set of storage resources that
can be used to serve core Cinder requests, e.g. volumes/snapshots.
This notion is almost identical to a Cinder Volume Backend, for it
has similar attributes (capacity, capability). The main difference
is that a Pool cannot exist on its own; it must reside in a Volume
Backend. One Volume Backend can have multiple Pools, but Pools
don't have sub-Pools (even if they do, sub-Pools don't get
exposed to Cinder, yet). A Pool has a unique name in its backend
namespace, which means a Volume Backend can't have two Pools using
the same name.
Legacy Volume - Volumes that were created before pools were introduced.
There are several corner cases where legacy volumes could cause
issues, especially for those drivers that already handled pools
internally (e.g. 3Par, NetApp). Please refer to 'Limitation/Known
Issues' for details.
Design
------
The workflow in this change is simple:
1) Volume Backends report to the scheduler how many pools they have,
what those pools look like and what they are capable of;
2) When a request comes in, the scheduler picks the pool that best
fits the need and passes the request to the backend where the
target pool resides;
3) The volume driver gets the message and lets the target pool serve
the request as the scheduler instructed.
To support placing resources (volumes/snapshots) onto a pool, these
pieces are currently missing in Cinder:
1. Volume Backends reporting capacity/capabilities at the pool level;
2. The scheduler filtering/weighing based on pool capacity/capability
and placing volumes/snapshots onto a pool of a certain backend;
3. Recording which pool of a backend a resource is located on, and
passing that information between the scheduler and the volume backend.
Missing piece 1 is solved by a) updating the format of the periodic
volume stats message to include pool stats; b) altering the
manager/driver to collect and report pool stats. Below is an example
of the updated report message that contains 2 pools:
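(The example is reconstructed here as a sketch; field values are illustrative
and only commonly reported keys are shown.)

    {
        'volume_backend_name': 'backend_1',
        'vendor_name': 'OpenStack',
        'driver_version': '1.0.0',
        'storage_protocol': 'iSCSI',
        # backend-level mandatory stats/capabilities above,
        # per-pool mandatory stats/capabilities below
        'pools': [
            {'pool_name': 'pool_1',
             'total_capacity_gb': 1024,
             'free_capacity_gb': 500,
             'reserved_percentage': 0,
             'QoS_support': False},
            {'pool_name': 'pool_2',
             'total_capacity_gb': 2048,
             'free_capacity_gb': 1800,
             'reserved_percentage': 0,
             'QoS_support': True},
        ],
    }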
Notice that there are now two levels of mandatory/fixed stats &
capabilities that every volume driver should report.
The scheduler change is mostly done in scheduler/host_manager.py:
* HostState adds a list element to hold PoolState(s) (a HostState
subclass). Each PoolState can be treated like a HostState since it has
as much information as HostState and also shares the same structure.
HostState.update_capabilities()/update_from_volume_capability() are
changed to handle both legacy and new report messages.
* HostManager.get_all_host_states() now returns a PoolState iterator
that includes all pools the scheduler tracks. To filters and weighers,
PoolState() and HostState() are identical, so there is no need to
change filters and weighers since they deal with the same kind of
information and the exact same data structure as before. What filters
and weighers deal with looks like this:
# Before this change
HostState() for Backend 1
...
HostState() for Backend N
# After this change
PoolState() for Backend 1 Pool 1
...
PoolState() for Backend 1 Pool N
...
PoolState() for Backend N Pool 1
...
PoolState() for Backend N Pool N
With this change, filter scheduler will pick a pool@host instead of a
host.
Now that we are able to report and decide at the pool level, the 3rd
missing piece is easy to fix. Just like how the multi-backend
volume service is supported, we encode the pool name into the 'host'
field of the Volume table. The 'host' field is now 'host@backend#pool'.
Notice that this change doesn't mean that the cinder-volume service will
have to subscribe to multiple RPC channels. There is no need to mess with
message queue subscription at all, because we do a little trick when
determining the RPC target in VolumeRPCAPI: correct host info like
'host@backend' is extracted from 'host@backend#pool' before sending
RPC messages. Therefore, once the scheduler decides which pool on a
backend shall serve a request, it updates the 'host' field of the volume
record in the DB to be like 'host@backend#pool', but it still sends the
RPC message to 'host@backend', which cinder-volume is listening to.
A similar action is taken when creating backups for volumes: host@backend
is extracted from volume['host'] so that the correct backup service can be
picked.
Other changes are done in this patch:
* Change get_volume_stats() in ISCSIVolumeDriver/ISERDriver to include
pool stats, and change default total/free_capacity_gb from 'infinite'
to 0.
* Add logic in volume manager init_host() to detect legacy volumes and
try to update host info for them if driver is able to provide pool info.
* Add a get_pool() driver API to return the pool name of a given volume;
this helps the volume manager handle legacy volumes, especially for those
backends that already support pooling internally (e.g. 3Par, NetApp).
* Implement get_pool() for the LVM driver to return the volume backend name
as the pool name.
* Add an extract_host() helper function in cinder/volume/utils.py to help
handle cases where there is a need to extract 'host', 'backend' or
'pool' information from volume['host'].
* Add an append_host() helper function in cinder/volume/utils.py to help
concatenate the host and pool strings into one value for the volume['host']
field (see the usage sketch after this list).
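A usage sketch of these helpers (expected results shown in comments; the
default level for extract_host() is 'backend'):

    from cinder.volume import utils as volume_utils

    host = 'host1@backendA#pool0'
    volume_utils.extract_host(host)                      # 'host1@backendA'
    volume_utils.extract_host(host, level='host')        # 'host1'
    volume_utils.extract_host(host, level='pool')        # 'pool0'
    volume_utils.append_host('host1@backendA', 'pool0')  # 'host1@backendA#pool0'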
Limitation/Known Issues
-----------------------
* The term 'host' in Cinder used to refer to 'backend', and it was
consistent from the view of end users/admins down to Cinder internals.
Now that pools are exposed to the Cinder scheduler, the scheduler starts
treating different pools on the same backend as different hosts. Therefore,
we have to expose pools to admins at least, because migrating a volume now
has to include the pool in the 'host' parameter in order to work. As for
end users, the idea that the 'host' of a volume equals the storage backend
works well for them, so they can decide the migration policy when retyping
volumes, or choose to create a new volume on the same or a different host
as/from existing volumes. It is *not* easy to hide pools from end users and
make retype or the affinity filter work like before. This change has a
special code path for legacy volumes, to allow (potential) migration
between pools even when migration_policy is set to 'never'. But not every
driver has the magic to move volumes from one pool to another at minimum
cost. The behavior can be inconsistent between drivers (the same command
may take totally different amounts of time to finish), which could be very
confusing.
* Drivers that want to support pools need to be updated, but drivers should
work just like they used to without any change, except:
- creating a volume using same/different host hints with legacy volumes
may NOT work as expected, because 'hostA' is considered different
from 'hostA#pool0' and 'hostA#pool1'. But a legacy volume on 'hostA'
might actually reside in pool0; only the driver has this knowledge.
- retyping a legacy volume has the issue mentioned above.
The ultimate solution for all these corner cases is to update the Cinder DB
to add 'pool' info for legacy volumes. The problem is that only the driver
knows such info, which is why we add a new driver API, get_pool(),
so that the volume manager is able to learn the pool info from the driver
and update the host field of legacy volumes in the DB.
User-Visible Change
-------------------
DocImpact
For managing and migrating volumes, the user now needs to provide pool
information as part of the host string. For example:
cinder manage --source-name X --name newX host@backend#POOL
cinder migrate UUID host@backend#POOL
Vincent Hou [Wed, 23 Jul 2014 04:06:38 +0000 (00:06 -0400)]
Add QoS support to IBM Storwize driver
The IBM Storwize driver can be enabled with QoS support by setting the I/O
throttling rate parameter, which caps the amount of I/O.
This patch adds the QoS configuration to the create_volume,
create_volume_from_snapshot, create_cloned_volume and retype methods.
The QoS for IBM Storwize storage can be configured in 3 ways:
* Add the key "qos:IOThrottling" to a QoS spec and associate this QoS
spec with a volume type (see the example after this list).
* Add the key "qos:IOThrottling" into the extra spec of a volume type.
* Add a metadata with the key "qos:IOThrottling".
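For example (illustrative names and value), the first method could look like:
cinder qos-create storwize_qos qos:IOThrottling=500
cinder qos-associate <qos_spec_id> <volume_type_id>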
Ed Balduf [Wed, 7 May 2014 19:04:14 +0000 (13:04 -0600)]
Driver for Fusion-io ioControl Hybrid array
Standard Cinder driver for iSCSI target array.
Uses REST API of the ioControl array.
Implements Quality of Service through the 5 Policies available on the array.
Add Apache License block
Test results: https://bugs.launchpad.net/cinder/+bug/1317248
Jay S. Bryant [Thu, 14 Aug 2014 20:03:10 +0000 (15:03 -0500)]
Make ssh-host-key-policy configurable
This patch adds configuration options for ssh_hosts_key_file and
strict_ssh_host_key_policy. You can set strict_ssh_host_key_policy
to 'True' or 'False'. If set to False, the first connection to a host
will cause its key to be added to the known_hosts file. Subsequent
connections will be checked against the existing key. A change in the
key is assumed to be a man-in-the-middle attack, so the connection is
rejected.
If strict_ssh_host_key_policy is 'True', the key for the host being
connected to must already be in the hosts_key_file; no first-connection
assumptions are made.
strict_ssh_host_key_policy defaults to 'False' to keep behavior similar
to the existing functionality.
With this patch, a default of $state_path/ssh_known_hosts is used for the
known_hosts file. Unlike the previous approach, this now requires the
user to have a known_hosts file that is writable somewhere. The option
is configurable if they don't want to use $state_path/ssh_known_hosts.
DocImpact: Need to document the new strict_ssh_host_key_policy as well
as the ssh_hosts_key_file. A note should be made for drivers that may
pass a hosts_key_file via kwargs when creating an ssh pool: their file
will be loaded along with the file configured via /etc/cinder.conf.
Also worth noting: for development environments, an ssh_hosts_key_file of
/dev/null and a strict_ssh_host_key_policy setting of 'False' may be used.
Using those settings will effectively ignore these changes.
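For reference, the two options described above could be expressed like this
(a sketch using oslo.config; the help strings are assumed):

    from oslo.config import cfg

    ssh_opts = [
        cfg.StrOpt('ssh_hosts_key_file',
                   default='$state_path/ssh_known_hosts',
                   help='File containing SSH host keys for the systems that '
                        'Cinder needs to connect to.'),
        cfg.BoolOpt('strict_ssh_host_key_policy',
                    default=False,
                    help='If False, the first connection to a host adds its '
                         'key to the known_hosts file; if True, the key must '
                         'already be present.'),
    ]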