-=========================================================================================
-README for Quantum v2.0:
-A Plugin Framework for Supporting Quantum Networks Spanning Multiple Switches
-=========================================================================================
+Cisco Quantum Virtual Network Plugin
-Introduction
-------------
+This plugin implements Quantum v2 APIs and helps configure
+topologies consisting of virtual and physical switches.
-This plugin implementation provides the following capabilities:
-
-* A reference implementation for a Quantum Plugin Framework
-(For details see: http://wiki.openstack.org/quantum-multi-switch-plugin)
-* Supports multiple switches in the network
-* Supports multiple models of switches concurrently
-* Supports use of multiple L2 technologies
-* Supports the Cisco Nexus family of switches.
-* Supports Cisco UCS blade servers with M81KR Virtual Interface Cards
- (aka "Palo adapters") via 802.1Qbh.
-
-Pre-requisites
---------------
-(The following are necessary only when using UCS and/or Nexus devices in your
-system. If you plan to use only the plugin framework, you do not need these.)
-
-If you are using a Nexus switch in your topology, you'll need the following
-NX-OS version and packages to enable Nexus support:
-* NX-OS 5.2.1 (Delhi) Build 69 or above.
-* paramiko library - SSHv2 protocol library for python
-* ncclient v0.3.1 - Python library for NETCONF clients
- ** You need a version of ncclient modified by Cisco Systems.
- To get it, from your shell prompt do:
-
- git clone git@github.com:CiscoSystems/ncclient.git
- cd ncclient
- sudo python ./setup.py install
-
- ** For more information on ncclient, see:
- http://schmizz.net/ncclient/
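-
- As a rough sketch, the plugin's NETCONF session to the switch via ncclient
- looks like the following (the host, credentials, and XML payload here are
- placeholders, not the driver's actual code):
-
-# Hypothetical sketch: pushing configuration to a Nexus switch via NETCONF.
-from ncclient import manager
-
-# Placeholder payload; the real driver builds NX-OS-specific XML snippets.
-config_snippet = "<config> ... NX-OS VLAN configuration XML ... </config>"
-
-m = manager.connect(host="10.0.0.1", port=22,
-                    username="admin", password="secret")
-m.edit_config(target="running", config=config_snippet)
-m.close_session()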
-
-* One or more UCS B200 series blade servers with M81KR VIC (aka
- Palo adapters) installed.
-* UCSM 2.0 (Capitola) Build 230 or above.
-* OS supported:
- ** RHEL 6.1 or above
- ** Ubuntu 11.10 or above
- ** Package: python-configobj-4.6.0-3.el6.noarch (or newer)
- ** Package: python-routes-1.12.3-2.el6.noarch (or newer)
- ** Package: mysql-python (installed via pip; see the sketch below)
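-
- For example, on RHEL the packages above can be installed with (package
- names as listed; adjust the commands for Ubuntu):
-
-sudo yum install python-configobj python-routes
-sudo pip install mysql-python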
-
-
-Module Structure:
------------------
-* quantum/plugins/cisco/ - Contains the Network Plugin Framework
- /client - CLI module for core and extensions API
- /common - Modules common to the entire plugin
- /conf - All configuration files
- /db - Persistence framework
- /models - Class(es) which tie the logical abstractions
- to the physical topology
- /nova - Scheduler and VIF-driver to be used by Nova
- /nexus - Nexus-specific modules
- /segmentation - Implementation of segmentation manager,
- e.g. VLAN Manager
- /services - Set of orchestration libraries to insert
- In-path Networking Services
- /tests - Tests specific to this plugin
- /ucs - UCS-specific modules
-
-
-Plugin Installation Instructions
-----------------------------------
-1. Make a backup copy of quantum/etc/quantum.conf
-
-2. Edit quantum/etc/quantum.conf and set the "core_plugin" for the v2 API:
-
-core_plugin = quantum.plugins.cisco.network_plugin.PluginV2
-
-3. MySQL database setup:
- 3a. Create quantum_l2network database in mysql with the following command -
-
-mysql -u<mysqlusername> -p<mysqlpassword> -e "create database quantum_l2network"
-
- 3b. Enter the quantum_l2network database configuration info in the
- quantum/plugins/cisco/conf/db_conn.ini file.
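-
- A minimal sketch of that file's contents (the key names here are
- illustrative; follow the sample file shipped with the plugin):
-
-[DATABASE]
-name = quantum_l2network
-user = <mysqlusername>
-pass = <mysqlpassword>
-host = <mysql_host_address>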
-
-4. If you want to turn on support for Cisco Nexus switches:
- 4a. Uncomment the nexus_plugin property in
- etc/quantum/plugins/cisco/cisco_plugins.ini to read:
-
-[PLUGINS]
-nexus_plugin=quantum.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
-
- 4b. Enter the relevant configuration in the
- etc/quantum/plugins/cisco/nexus.ini file. Example:
-
-[SWITCH]
-# Change the following to reflect the IP address of the Nexus switch.
-# This will be the address at which Quantum sends and receives configuration
-# information via SSHv2.
-nexus_ip_address=10.0.0.1
-# Port numbers on the Nexus switch to which each of the compute nodes is connected.
-# Use shortened interface syntax, e.g. "1/10" not "Ethernet1/10", separated by ",".
-ports=1/10,1/11,1/12
-# Port number on which SSH runs on the Nexus switch. Typically this is 22
-# unless you've configured your switch otherwise.
-nexus_ssh_port=22
-
-[DRIVER]
-name=quantum.plugins.cisco.nexus.cisco_nexus_network_driver.CiscoNEXUSDriver
-
- 4c. Make sure that the SSH host key of the Nexus switch is known to the
- host on which you are running the Quantum service. You can do
- this simply by logging in to your Quantum host as the user that
- Quantum runs as and SSHing to the switch at least once. If the
- host key changes (e.g. due to replacement of the supervisor or
- clearing of the SSH config on the switch), you may need to repeat
- this step and remove the old hostkey from ~/.ssh/known_hosts.
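-
- For example, assuming the switch address configured above:
-
-ssh admin@10.0.0.1        # accept the host key once, as the Quantum user
-ssh-keygen -R 10.0.0.1    # remove a stale key after the switch's key changes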
-
-5. If you are using UCS blade servers with M81KR Virtual Interface Cards and
- want to leverage the VM-FEX features:
-
- 5a. Uncomment the ucs_plugin properties in
- etc/quantum/plugins/cisco/cisco_plugins.ini to read:
-
-[PLUGINS]
-ucs_plugin=quantum.plugins.cisco.ucs.cisco_ucs_plugin_v2.UCSVICPlugin
-[INVENTORY]
-ucs_plugin=quantum.plugins.cisco.ucs.cisco_ucs_inventory_v2.UCSInventory
-
- 5b. Enter the relevant configuration in the
- etc/quantum/plugins/cisco/ucs.ini file. Example:
-
-[UCSM]
-#change the following to the appropriate UCSM IP address
-#if you have more than one UCSM, enter info from any one
-ip_address=<put_ucsm_ip_address_here>
-default_vlan_name=default
-default_vlan_id=1
-max_ucsm_port_profiles=1024
-profile_name_prefix=q-
-
-[DRIVER]
-name=quantum.plugins.cisco.ucs.cisco_ucs_network_driver.CiscoUCSMDriver
-
- 5c. Configure the UCS systems' information in your deployment by editing the
- quantum/plugins/cisco/conf/ucs_inventory.ini file. You can configure multiple
- UCSMs per deployment, multiple chassis per UCSM, and multiple blades per
- chassis. Chassis ID and blade ID can be obtained from the UCSM (they will
- typically be numbers like 1, 2, 3, etc.). Also make sure that you put the exact
- hostname as nova sees it (the host column in the services table of the nova
- DB will give you that information).
-
-[ucsm-1]
-ip_address = <put_ucsm_ip_address_here>
-[[chassis-1]]
-chassis_id = <put_the_chassis_id_here>
-[[[blade-1]]]
-blade_id = <put_blade_id_here>
-host_name = <put_hostname_here>
-[[[blade-2]]]
-blade_id = <put_blade_id_here>
-host_name = <put_hostname_here>
-[[[blade-3]]]
-blade_id = <put_blade_id_here>
-host_name = <put_hostname_here>
-
-[ucsm-2]
-ip_address = <put_ucsm_ip_address_here>
-[[chassis-1]]
-chassis_id = <put_the_chassis_id_here>
-[[[blade-1]]]
-blade_id = <put_blade_id_here>
-host_name = <put_hostname_here>
-[[[blade-2]]]
-blade_id = <put_blade_id_here>
-host_name = <put_hostname_here>
-
- 5d. Configure your OpenStack installation to use the 802.1qbh VIF driver and
- Quantum-aware scheduler by editing the /etc/nova/nova.conf file with the
- following entries:
-
-scheduler_driver=quantum.plugins.cisco.nova.quantum_port_aware_scheduler.QuantumPortAwareScheduler
-quantum_host=127.0.0.1
-quantum_port=9696
-libvirt_vif_driver=quantum.plugins.cisco.nova.vifdirect.Libvirt802dot1QbhDriver
-libvirt_vif_type=802.1Qbh
-
- Note: To be able to bring up a VM on a UCS blade, you should first create a
- port for that VM using the Quantum create port API. VM creation will
- fail if an unused port is not available. If you have configured your
- Nova project with more than one network, Nova will attempt to instantiate
- the VM with one network interface (VIF) per configured network. To provide
- plugin points for each of these VIFs, you will need to create multiple
- Quantum ports, one for each of the networks, prior to starting the VM.
- However, in this case you will need to use the Cisco multiport extension
- API instead of the Quantum create port API. More details on using the
- multiport extension follow in the section on multi NIC support.
-
-6. To support the above configuration, you will need some Quantum modules. It's
- easiest to copy the entire quantum directory from your quantum installation into:
-
- /usr/lib/python2.7/site-packages/
-
- This needs to be done on each nova compute node.
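-
- For example (the source path is illustrative; use your checkout's location):
-
-sudo cp -r /path/to/quantum/quantum /usr/lib/python2.7/site-packages/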
-
-7. Verify that you have the correct credentials for each IP address listed
- in quantum/plugins/cisco/conf/credentials.ini. Example:
-
-# Provide the UCSM credentials; create a separate entry for each UCSM used in your system
-# UCSM IP address, username and password.
-[10.0.0.2]
-username=admin
-password=mySecretPasswordForUCSM
-
-# Provide the Nexus credentials, if you are using Nexus switches.
-# If not this will be ignored.
-[10.0.0.1]
-username=admin
-password=mySecretPasswordForNexus
-
- In general, make sure that every UCSM and Nexus switch used in your system
- has a credential entry in the above file. This is required for the system to
- be able to communicate with those devices.
-
-
-8. Start the Quantum service. If something doesn't work, verify your
- configuration of each of the above files.
-
-
-Multi NIC support for VMs
--------------------------
-As indicated earlier, if your Nova setup has a project with more than one network,
-Nova will try to create a virtual network interface (VIF) on the VM for each of those
-networks. Before each VM is instantiated, you should create Quantum ports on each of
-those networks. These ports need to be created using the following REST call:
-
-POST /1.0/extensions/csco/tenants/{tenant_id}/multiport/
-
-with request body:
-
-{'multiport':
- {'status': 'ACTIVE',
- 'net_id_list': net_id_list,
- 'ports_desc': {'key': 'value'}}}
-
-where,
-
-net_id_list is a list of network IDs: [netid1, netid2, ...]. The "ports_desc"
-dictionary is reserved for later use. For now, use it exactly as shown (same
-dictionary name, key, and value).
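-
-A minimal sketch of issuing this call with Python 2's standard library (the
-host, port, tenant ID, and network IDs below are placeholders):
-
-import httplib
-import json
-
-tenant_id = "demo"  # placeholder tenant
-body = json.dumps({"multiport": {"status": "ACTIVE",
-                                 "net_id_list": ["netid1", "netid2"],
-                                 "ports_desc": {"key": "value"}}})
-conn = httplib.HTTPConnection("127.0.0.1", 9696)
-conn.request("POST",
-             "/1.0/extensions/csco/tenants/%s/multiport/" % tenant_id,
-             body, {"Content-Type": "application/json"})
-resp = conn.getresponse()
-print resp.status, resp.read()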
-
-The corresponding CLI for this operation is as follows:
-
-PYTHONPATH=. python quantum/plugins/cisco/client/cli.py create_multiport <tenant_id> <net_id1,net_id2,...>
-
- (Note that you should not be using the create port core API in the above case.)
-
-Using an independent plugin as a device sub-plugin
--------------------------------------------------
-
-If you would like to use an independent virtual switch plugin as one of the
-sub-plugins (e.g. the OpenVSwitch plugin) with the Nexus device sub-plugin,
-perform the following steps:
-
-(The following instructions are with respect to the OpenVSwitch plugin)
-1. Update etc/quantum/plugins/cisco/l2network_plugin.ini
- In the [MODEL] section of the configuration file put the following configuration
- (note that this should be the only configuration in this section; all other
- configuration should be removed or commented out):
-
- model_class=quantum.plugins.cisco.models.virt_phy_sw_v2.VirtualPhysicalSwitchModelV2
-
-2. Update etc/quantum/plugins/cisco/cisco_plugins.ini
- In the [PLUGINS] section of the configuration file put the following configuration:
-
- vswitch_plugin=quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
-
-3. Set the DB name; the same name has to be configured in three places (see the example below):
- In etc/quantum/plugins/cisco/conf/db_conn.ini set the "name" value
- In /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini set the "sql_connection"
- In /etc/quantum/dhcp_agent.ini set the "db_connection"
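-
- For example, if the database is named quantum_l2network (the connection
- URLs are illustrative):
-
-name = quantum_l2network
-sql_connection = mysql://<user>:<password>@<mysql_host>/quantum_l2network
-db_connection = mysql://<user>:<password>@<mysql_host>/quantum_l2network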
-
-4. The range of VLAN IDs has to be set in the OpenVSwitch configuration file:
- In /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
- Set:
- vlan_min = <lower_id>
- vlan_max = <higher_id>
- enable_tunneling = False
-
-5. For the Nexus device sub-plugin configuration, refer to the sections above.
-
-How to test the installation
-----------------------------
-The unit tests are located at quantum/plugins/cisco/tests/unit/v2. They can be
-executed from the top level Quantum directory using the run_tests.sh script.
-
-1. Testing the core API (without UCS/Nexus/RHEL device sub-plugins configured):
- By default all the device sub-plugins are disabled (commented out) in
- etc/quantum/plugins/cisco/cisco_plugins.ini
-
- ./run_tests.sh quantum.plugins.cisco.tests.unit.v2.test_api_v2
- ./run_tests.sh quantum.plugins.cisco.tests.unit.v2.test_network_plugin
-
-2. For testing the Nexus device sub-plugin perform the following configuration:
-
- Edit etc/quantum/plugins/cisco/cisco_plugins.ini to add:
- In the [PLUGINS] section add:
-nexus_plugin=quantum.plugins.cisco.nexus.cisco_nexus_plugin_v2.NexusPlugin
-
- Edit the etc/quantum/plugins/cisco/nexus.ini file.
- When not using Nexus hardware use the following dummy configuration verbatim:
-[SWITCH]
-nexus_ip_address=1.1.1.1
-ports=1/10,1/11,1/12
-nexus_ssh_port=22
-[DRIVER]
-name=quantum.plugins.cisco.tests.unit.v2.nexus.fake_nexus_driver.CiscoNEXUSFakeDriver
- Or when using Nexus hardware (put the values relevant to your setup):
-[SWITCH]
-nexus_ip_address=1.1.1.1
-ports=1/10,1/11,1/12
-nexus_ssh_port=22
-[DRIVER]
-name=quantum.plugins.cisco.nexus.cisco_nexus_network_driver.CiscoNEXUSDriver
-
- (Note: Make sure that quantum/plugins/cisco/conf/credentials.ini has an entry for
- the nexus_ip_address being used in the above cases)
-
-3. For testing the UCS device sub-plugin perform the following configuration:
-
- Edit etc/quantum/plugins/cisco/cisco_plugins.ini to add:
- In the [PLUGINS] section add:
-ucs_plugin=quantum.plugins.cisco.ucs.cisco_ucs_plugin_v2.UCSVICPlugin
-
- In the [INVENTORY] section add:
- When not using UCS hardware:
-ucs_plugin=quantum.plugins.cisco.tests.unit.v2.ucs.cisco_ucs_inventory_fake.UCSInventory
- Or when using UCS hardware:
-ucs_plugin=quantum.plugins.cisco.ucs.cisco_ucs_inventory_v2.UCSInventory
-
- Edit the etc/quantum/plugins/cisco/ucs.ini file.
- When not using UCS hardware:
-[DRIVER]
-name=quantum.plugins.cisco.tests.unit.v2.ucs.fake_ucs_driver.CiscoUCSMFakeDriver
- Or when using UCS hardware:
-[DRIVER]
-name=quantum.plugins.cisco.ucs.cisco_ucs_network_driver.CiscoUCSMDriver
-
-
-:Web site: http://wiki.openstack.org/cisco-quantum
-:Copyright: 2012 Cisco Systems, Inc.
-:Contact: netstack@lists.launchpad.net
-
-=========================================================================================
-README for Quantum v1 and v1.1:
-A Quantum Plugin Framework for Supporting L2 Networks Spanning Multiple Switches
-=========================================================================================
-
-:Author: Sumit Naiksatam, Ram Durairaj, Mark Voelker, Edgar Magana, Shweta Padubidri,
- Rohit Agarwalla, Ying Liu, Debo Dutta
-:Contact: netstack@lists.launchpad.net
-:Web site: https://launchpad.net/~cisco-openstack
-:Copyright: 2011 Cisco Systems, Inc.
-
-.. contents::
-
-Introduction
-------------
-
-This plugin implementation provides the following capabilities
-to help you take your Layer 2 network for a Quantum leap:
-
-* A reference implementation for a Quantum Plugin Framework
-(For details see: http://wiki.openstack.org/quantum-multi-switch-plugin)
-* Supports multiple switches in the network
-* Supports multiple models of switches concurrently
-* Supports use of multiple L2 technologies
-* Supports Cisco UCS blade servers with M81KR Virtual Interface Cards
- (aka "Palo adapters") via 802.1Qbh.
-* Supports the Cisco Nexus family of switches.
-
-It does not provide:
-
-* A hologram of Al that only you can see.
-* A map to help you find your way through time.
-* A cure for amnesia or your swiss-cheesed brain.
-
-Let's leap in!
-
-Pre-requisites
---------------
-(The following are necessary only when using UCS and/or Nexus devices in your
-system. If you plan to use only the plugin framework, you do not need these.)
-* One or more UCS B200 series blade servers with M81KR VIC (aka
- Palo adapters) installed.
-* UCSM 2.0 (Capitola) Build 230 or above.
-* OpenStack Diablo D3 or later (should have VIF-driver support)
-* OS supported:
- ** RHEL 6.1 or above
- ** Ubuntu 11.10 or above
- ** Package: python-configobj-4.6.0-3.el6.noarch (or newer)
- ** Package: python-routes-1.12.3-2.el6.noarch (or newer)
- ** Package: pip install mysql-python
-
-If you are using a Nexus switch in your topology, you'll need the following
-NX-OS version and packages to enable Nexus support:
-* NX-OS 5.2.1 (Delhi) Build 69 or above.
-* paramiko library - SSHv2 protocol library for python
-* ncclient v0.3.1 - Python library for NETCONF clients
- ** You need a version of ncclient modified by Cisco Systems.
- To get it, from your shell prompt do:
-
- git clone git@github.com:CiscoSystems/ncclient.git
- cd ncclient
- sudo python ./setup.py install
-
- ** For more information on ncclient, see:
- http://schmizz.net/ncclient/
-
-Module Structure:
------------------
-* quantum/plugins/cisco/ - Contains the L2-Network Plugin Framework
- /client - CLI module for core and extensions API
- /common - Modules common to the entire plugin
- /conf - All configuration files
- /db - Persistence framework
- /models - Class(es) which tie the logical abstractions
- to the physical topology
- /nova - Scheduler and VIF-driver to be used by Nova
- /nexus - Nexus-specific modules
- /segmentation - Implementation of segmentation manager,
- e.g. VLAN Manager
- /services - Set of orchestration libraries to insert
- In-path Networking Services
- /tests - Tests specific to this plugin
- /ucs - UCS-specific modules
-
-
-Plugin Installation Instructions
-----------------------------------
-1. Make a backup copy of quantum/etc/quantum.conf
-
-2. Edit quantum/etc/quantum.conf and set the "core_plugin" for the v2 API
-
-core_plugin = quantum.plugins.cisco.network_plugin.PluginV2
-
- OR for v1.1 API
-
-core_plugin = quantum.plugins.cisco.l2network_plugin.L2Network
-
-3. Configure your OpenStack installation to use the 802.1qbh VIF driver and
- Quantum-aware scheduler by editing the /etc/nova/nova.conf file with the
- following entries:
-
---scheduler_driver=quantum.plugins.cisco.nova.quantum_port_aware_scheduler.QuantumPortAwareScheduler
---quantum_host=127.0.0.1
---quantum_port=9696
---libvirt_vif_driver=quantum.plugins.cisco.nova.vifdirect.Libvirt802dot1QbhDriver
---libvirt_vif_type=802.1Qbh
-
- Note: To be able to bring up a VM on a UCS blade, you should first create a
- port for that VM using the Quantum create port API. VM creation will
- fail if an unused port is not available. If you have configured your
- Nova project with more than one network, Nova will attempt to instantiate
- the VM with one network interface (VIF) per configured network. To provide
- plugin points for each of these VIFs, you will need to create multiple
- Quantum ports, one for each of the networks, prior to starting the VM.
- However, in this case you will need to use the Cisco multiport extension
- API instead of the Quantum create port API. More details on using the
- multiport extension follow in the section on multi NIC support.
-
-4. To support the above configuration, you will need some Quantum modules. It's easiest
- to copy the entire quantum directory from your quantum installation into:
-
-/usr/lib/python2.7/site-packages/
-
- This needs to be done for each nova compute node.
-
-5. If you want to turn on support for Cisco Nexus switches:
- 5a. Uncomment the nexus_plugin property in
- etc/quantum/plugins/cisco/cisco_plugins.ini to read:
-
-nexus_plugin=quantum.plugins.cisco.nexus.cisco_nexus_plugin.NexusPlugin
-
- 5b. Enter the relevant configuration in the
- etc/quantum/plugins/cisco/nexus.ini file. Example:
-
-[SWITCH]
-# Change the following to reflect the IP address of the Nexus switch.
-# This will be the address at which Quantum sends and receives configuration
-# information via SSHv2.
-nexus_ip_address=10.0.0.1
-# Port numbers on the Nexus switch to which each of the UCSM 6120s is connected.
-# Use shortened interface syntax, e.g. "1/10" not "Ethernet1/10".
-nexus_first_port=1/10
-nexus_second_port=1/11
-# Port number on which SSH runs on the Nexus switch. Typically this is 22
-# unless you've configured your switch otherwise.
-nexus_ssh_port=22
-
-[DRIVER]
-name=quantum.plugins.cisco.nexus.cisco_nexus_network_driver.CiscoNEXUSDriver
-
- 5c. Make sure that the SSH host key of the Nexus switch is known to the
- host on which you are running the Quantum service. You can do
- this simply by logging in to your Quantum host as the user that
- Quantum runs as and SSHing to the switch at least once. If the
- host key changes (e.g. due to replacement of the supervisor or
- clearing of the SSH config on the switch), you may need to repeat
- this step and remove the old hostkey from ~/.ssh/known_hosts.
-
-6. Plugin Persistence framework setup:
- 6a. Create quantum_l2network database in mysql with the following command -
-
-mysql -u<mysqlusername> -p<mysqlpassword> -e "create database quantum_l2network"
-
- 6b. Enter the quantum_l2network database configuration info in the
- quantum/plugins/cisco/conf/db_conn.ini file.
-
- 6c. If there is a change in the plugin configuration, the service would need
- to be restarted after dropping and re-creating the database using
- the following commands -
-
-mysql -u<mysqlusername> -p<mysqlpassword> -e "drop database quantum_l2network"
-mysql -u<mysqlusername> -p<mysqlpassword> -e "create database quantum_l2network"
-
-7. Verify that you have the correct credentials for each IP address listed
- in quantum/plugins/cisco/conf/credentials.ini. Example:
-
-# Provide the UCSM credentials; create a separate entry for each UCSM used in your system
-# UCSM IP address, username and password.
-[10.0.0.2]
-username=admin
-password=mySecretPasswordForUCSM
-
-# Provide the Nexus credentials, if you are using Nexus switches.
-# If not this will be ignored.
-[10.0.0.1]
-username=admin
-password=mySecretPasswordForNexus
-
- In general, make sure that every UCSM and Nexus switch used in your system
- has a credential entry in the above file. This is required for the system to
- be able to communicate with those devices.
-
-8. Configure the UCS systems' information in your deployment by editing the
- quantum/plugins/cisco/conf/ucs_inventory.ini file. You can configure multiple
- UCSMs per deployment, multiple chassis per UCSM, and multiple blades per
- chassis. Chassis ID and blade ID can be obtained from the UCSM (they will
- typically be numbers like 1, 2, 3, etc.). Also make sure that you put the exact
- hostname as nova sees it (the host column in the services table of the nova
- DB will give you that information).
-
-[ucsm-1]
-ip_address = <put_ucsm_ip_address_here>
-[[chassis-1]]
-chassis_id = <put_the_chassis_id_here>
-[[[blade-1]]]
-blade_id = <put_blade_id_here>
-host_name = <put_hostname_here>
-[[[blade-2]]]
-blade_id = <put_blade_id_here>
-host_name = <put_hostname_here>
-[[[blade-3]]]
-blade_id = <put_blade_id_here>
-host_name = <put_hostname_here>
-
-[ucsm-2]
-ip_address = <put_ucsm_ip_address_here>
-[[chassis-1]]
-chassis_id = <put_the_chassis_id_here>
-[[[blade-1]]]
-blade_id = <put_blade_id_here>
-host_name = <put_hostname_here>
-[[[blade-2]]]
-blade_id = <put_blade_id_here>
-host_name = <put_hostname_here>
-
-
-9. Start the Quantum service. If something doesn't work, verify that
- your configuration of each of the above files hasn't gone a little kaka.
- Once you've put right what once went wrong, leap on.
-
-
-Multi NIC support for VMs
--------------------------
-As indicated earlier, if your Nova setup has a project with more than one network,
-Nova will try to create a virtual network interface (VIF) on the VM for each of those
-networks. That implies -
-
- (1) You should create the same number of networks in Quantum as in your Nova
- project.
-
- (2) Before each VM is instantiated, you should create Quantum ports on each of those
- networks. These ports need to be created using the following REST call:
-
-POST /1.0/extensions/csco/tenants/{tenant_id}/multiport/
-
-with request body:
-
-{'multiport':
- {'status': 'ACTIVE',
- 'net_id_list': net_id_list,
- 'ports_desc': {'key': 'value'}}}
-
-where,
-
-net_id_list is a list of network IDs: [netid1, netid2, ...]. The "ports_desc"
-dictionary is reserved for later use. For now, use it exactly as shown (same
-dictionary name, key, and value).
-
-The corresponding CLI for this operation is as follows:
-
-PYTHONPATH=. python quantum/plugins/cisco/client/cli.py create_multiport <tenant_id> <net_id1,net_id2,...>
-
- (Note that you should not be using the create port core API in the above case.)
-
-
-Using the Command Line Client to work with this Plugin
-------------------------------------------------------
-A command line client is packaged with this plugin. This module can be used
-to invoke the core API as well as the extensions API, so that you don't have
-to switch between different CLI modules (it internally invokes the Quantum
-CLI module for the core APIs to ensure consistency when using either). This
-command line client can be invoked as follows:
-
-PYTHONPATH=.:tools python quantum/plugins/cisco/client/cli.py
-
-1. Creating the network
-
-# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py create_net -H 10.10.2.6 demo net1
-Created a new Virtual Network with ID: c4a2bea7-a528-4caf-b16e-80397cd1663a
-for Tenant demo
-
-
-2. Listing the networks
-
-# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py list_nets -H 10.10.2.6 demo
-Virtual Networks for Tenant demo
- Network ID: 0e85e924-6ef6-40c1-9f7a-3520ac6888b3
- Network ID: c4a2bea7-a528-4caf-b16e-80397cd1663a
-
-
-3. Creating one port on each of the networks
-
-# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py create_multiport -H 10.10.2.6 demo c4a2bea7-a528-4caf-b16e-80397cd1663a,0e85e924-6ef6-40c1-9f7a-3520ac6888b3
-Created ports: {u'ports': [{u'id': u'118ac473-294d-480e-8f6d-425acbbe81ae'}, {u'id': u'996e84b8-2ed3-40cf-be75-de17ff1214c4'}]}
-
-
-4. List all the ports on a network
-
-# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py list_ports -H 10.10.2.6 demo c4a2bea7-a528-4caf-b16e-80397cd1663a
-Ports on Virtual Network: c4a2bea7-a528-4caf-b16e-80397cd1663a
-for Tenant: demo
- Logical Port: 118ac473-294d-480e-8f6d-425acbbe81ae
-
-
-5. Show the details of a port
-
-# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py show_port -H 10.10.2.6 demo c4a2bea7-a528-4caf-b16e-80397cd1663a 118ac473-294d-480e-8f6d-425acbbe81ae
-Logical Port ID: 118ac473-294d-480e-8f6d-425acbbe81ae
-administrative State: ACTIVE
-interface: <none>
-on Virtual Network: c4a2bea7-a528-4caf-b16e-80397cd1663a
-for Tenant: demo
-
-
-6. Start the VM instance using Nova
- Note that when using UCS and the 802.1Qbh features, the association of the
- VIF-ID (also referred to as interface ID) on the VM's NIC with a port will
- happen automatically when the VM is instantiated. At this point, doing a
- show_port will reveal the VIF-ID associated with the port. To indicate that
- this VIF-ID is still detached from the network it would eventually be on, you
- will see the suffix "(detached)" on the VIF-ID. This indicates that although
- the VIF-ID and the port have been associated, the VIF still does not have
- connectivity to the network on which the port resides. That connectivity
- will be established only after the plug/attach operation is performed (as
- described in the next step).
-
-# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py show_port demo c4a2bea7-a528-4caf-b16e-80397cd1663a 118ac473-294d-480e-8f6d-425acbbe81ae
-Logical Port ID: 118ac473-294d-480e-8f6d-425acbbe81ae
-administrative State: ACTIVE
-interface: b73e3585-d074-4379-8dde-931c0fc4db0e(detached)
-on Virtual Network: c4a2bea7-a528-4caf-b16e-80397cd1663a
-for Tenant: demo
-
-
-7. Plug interface and port into the network
- Use the interface information obtained in step 6 to plug the interface into
- the network.
-
-# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py plug_iface demo c4a2bea7-a528-4caf-b16e-80397cd1663a 118ac473-294d-480e-8f6d-425acbbe81ae b73e3585-d074-4379-8dde-931c0fc4db0e
-Plugged interface b73e3585-d074-4379-8dde-931c0fc4db0e
-into Logical Port: 118ac473-294d-480e-8f6d-425acbbe81ae
-on Virtual Network: c4a2bea7-a528-4caf-b16e-80397cd1663a
-for Tenant: demo
-
-
-8. Unplug an interface and port from the network
-
-# PYTHONPATH=. python quantum/plugins/cisco/client/cli.py unplug_iface demo c4a2bea7-a528-4caf-b16e-80397cd1663a 118ac473-294d-480e-8f6d-425acbbe81ae
-Unplugged interface from Logical Port: 118ac473-294d-480e-8f6d-425acbbe81ae
-on Virtual Network: c4a2bea7-a528-4caf-b16e-80397cd1663a
-for Tenant: demo
-
- Note: After unplugging, if you check the details of the port, you will
- see the VIF-ID associated with the port (but now suffixed with the state
- "detached"). At this point, it is possible to plug the VIF into the network
- again making use of the same VIF-ID. In general, once associated, the VIF-ID
- cannot be disassociated from the port until the VM is terminated. After the
- VM is terminated, the VIF-ID will be automatically disassociated from the
- port. To summarize, association and disassociation of the VIF-ID with a port
- happens automatically at the time of creating and terminating the VM. The
- connectivity of the VIF to the network is controlled by the user via the
- plug and unplug operations.
-
-
-How to test the installation
-----------------------------
-The unit tests are located at quantum/plugins/cisco/tests/unit. They can be
-executed from the main folder using the run_tests.sh script, or, for more
-detailed results, the run_tests.py script.
-
-1. All unit tests (needs environment setup as indicated in the pre-requisites):
-
- Modify the environment variable to point to the plugin directory
-
- In bash : export PLUGIN_DIR=quantum/plugins/cisco
- tcsh/csh : setenv PLUGIN_DIR quantum/plugins/cisco
-
- When no device sub-plugins (UCS or Nexus) are configured, execute only the l2networkApi unit tests.
- Device-specific sub-plugins can be disabled by commenting out all the entries in:
- etc/quantum/plugins/cisco/cisco_plugins.ini
-
- Execute the v2 API tests only using:
- ./run_tests.sh quantum.plugins.cisco.tests.unit.test_api_v2
- Execute the v1.1 API tests only using:
- ./run_tests.sh quantum.plugins.cisco.tests.unit.test_l2networkApi
-
- If just the UCS, or both the UCS and Nexus plugins are configured, all the tests can be executed by
- ./run_tests.sh
-
- All the test execution steps listed below are for when the sub-plugins are configured.
-
-2. Testing the core API (without UCS/Nexus/RHEL hardware, and can be run on
- Ubuntu):
- The Core API can be tested by initially disabling all device plugins, then
- enabling just the UCS plugins, and finally enabling both the UCS and the
- Nexus plugins.
- Device-specific plugins can be disabled by commenting out the entries in:
- etc/quantum/plugins/cisco/cisco_plugins.ini
-
- Execute the v2 API tests only using:
- ./run_tests.sh quantum.plugins.cisco.tests.unit.test_api_v2
- or
- python run_tests.py quantum.plugins.cisco.tests.unit.test_api_v2
-
- Execute the v1.1 API tests only using:
- ./run_tests.sh quantum.plugins.cisco.tests.unit.test_l2networkApi
- or
- python run_tests.py quantum.plugins.cisco.tests.unit.test_l2networkApi
-
-3. Specific Plugin unit test (needs environment setup as indicated in the
- pre-requisites):
-
- ./run_tests.sh quantum.plugins.cisco.tests.unit.<name_of_the_module>
-
- or
-
- python run_tests.py quantum.plugins.cisco.tests.unit.<name_of_the_module>
- E.g.:
-
- python run_tests.py quantum.plugins.cisco.tests.unit.test_ucs_plugin
-
- To run specific tests, use the following:
-
- ./run_tests.sh quantum.plugins.cisco.tests.unit.<name_of_the_module>:<ClassName>.<funcName>
-
- or
-
- python run_tests.py
- quantum.plugins.cisco.tests.unit.<name_of_the_module>:<ClassName>.<funcName>
-
- E.g.:
- python run_tests.py
- quantum.plugins.cisco.tests.unit.test_ucs_plugin:UCSVICTestPlugin.test_create_port
-
-4. Testing the Extension API
- The script is placed along with the other Cisco unit tests. The location may
- change later.
- Location: quantum/plugins/cisco/tests/unit/test_cisco_extension.py
-
- The script can be executed by:
- ./run_tests.sh quantum.plugins.cisco.tests.unit.test_cisco_extension
-
- or
-
- python run_tests.py quantum.plugins.cisco.tests.unit.test_cisco_extension
-
-
-Bingo bango bongo! That's it! Thanks for taking the leap into Quantum.
-
-...Oh, boy!
+For more details on usage, please refer to:
+http://wiki.openstack.org/cisco-quantum