Li Ma [Sun, 16 Mar 2014 11:32:22 +0000 (04:32 -0700)]
Include ML2 plugin in agent_scheduler migration
In the agent_scheduler migration script, which creates the
network-dhcp-binding and router-l3-binding tables, the ML2 plugin
should be included in the plugin list.
Creating a firewall while a router without any interface is present in
that tenant causes a KeyError on the FWAgent. The issue occurs because
such routers are present in the list of routers returned by
get_routers() but are not populated in the router_info dict. This change
adds a check before accessing the dict to prevent the exception (see the
sketch below). When an interface is added to such a router, the
_router_added processing in the l3agent populates the router_info dict,
and the FWAgent picks this up in that context.
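A minimal sketch of the guard, assuming an agent that keeps a
router_info dict keyed by router id; _process_router_firewall is a
hypothetical helper name:

    # Skip routers returned by get_routers() that have no entry in
    # router_info yet (i.e. routers without any interface).
    for router in routers:
        if router['id'] not in self.router_info:
            # _router_added will populate router_info once an
            # interface is attached; the firewall is processed then.
            continue
        ri = self.router_info[router['id']]
        self._process_router_firewall(ri, fw)  # hypothetical helper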
NSX: fix intermittent UT failure on vshield test_router_create
Logstash shows this happened twice during the past 24 hours.
Let's be proactive and see if we can mitigate the random failure
by raising the poll time.
mathieu-rohon [Fri, 14 Mar 2014 09:17:55 +0000 (10:17 +0100)]
Send fdb remove message when a port is migrated
The fdb_remove rpc message is sent when the status of the port
goes to BUILD, that is, when the new host sends a
get_device_details, which means that it owns the migrated port.
The fdb_add message will be sent as soon as the new host sends
update_device_up.
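A rough sketch of the resulting flow, assuming an l2population-style
mechanism; the _notify_fdb_* helpers are hypothetical names:

    def update_port_status(self, context, port, new_status):
        if new_status == 'BUILD':
            # The new host has sent get_device_details and now owns
            # the migrated port: tell the other agents to drop their
            # forwarding entries for the old location.
            self._notify_fdb_remove(context, port)
        elif new_status == 'ACTIVE':
            # The new host has sent update_device_up: advertise the
            # port's new location.
            self._notify_fdb_add(context, port)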
Aaron Rosen [Fri, 21 Feb 2014 00:18:11 +0000 (16:18 -0800)]
Notify nova when ports are ready
The following patch adds a callback from neutron to nova that notifies
nova when a port for an instance is *ready to be used*. After nova
receives this event it starts the instance, in the hope that its
networking will be in working order when it comes up.
NOTE: *ready to be used* currently means that a plugin changes the
status associated with a port in the db from
NO_VALUE/PORT_STATUS_DOWN/PORT_STATUS_ACTIVE to ACTIVE/ERROR.
Neutron will then signal nova network_vif_plugged:<status>, where
status will be either 'completed' or 'failed' depending on the
neutron port status.
Neutron also notifies nova when a port goes from status
PORT_STATUS_ACTIVE to PORT_STATUS_DOWN, sending nova a
network_vif_unplugged event.
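A sketch of the mapping just described; send_nova_event is a
hypothetical stand-in for the actual notifier:

    VIF_PLUGGED_STATUS = {'ACTIVE': 'completed', 'ERROR': 'failed'}

    def port_status_changed(port, old_status, new_status):
        if new_status in VIF_PLUGGED_STATUS:
            # e.g. network_vif_plugged:completed
            send_nova_event('network_vif_plugged',      # hypothetical
                            port['device_id'],
                            VIF_PLUGGED_STATUS[new_status])
        elif old_status == 'ACTIVE' and new_status == 'DOWN':
            send_nova_event('network_vif_unplugged',    # hypothetical
                            port['device_id'])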
Currently this patch breaks multiregion support (i.e. previously you
could back multiple nova regions with one neutron server): now that
neutron needs to notify nova, we will need to add a way to determine
which region a given port is in.
For now the workaround is to set notify_nova_port_active=False in
neutron, to prevent neutron from sending the notification, and to set
vif_plugging_is_fatal=False in nova.conf. Doing this keeps the current
interaction, where an instance is booted without waiting for the
network to be ready.
Kevin Benton [Mon, 10 Mar 2014 20:49:51 +0000 (20:49 +0000)]
Stop mock patches by default in base test class
Adds a mock.patch.stopall call to the base unit test
case cleanup routines to stop patches by default
in unit tests. This behavior can be overridden
by setting an attribute on the test class if
required.
Also removes the explicit stops in the Cisco n1kv,
VMware, and NEC unit tests to leverage the new
automatic cleanup.
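A minimal sketch of the change; the opt-out attribute name is
hypothetical:

    import mock
    import testtools

    class BaseTestCase(testtools.TestCase):

        def setUp(self):
            super(BaseTestCase, self).setUp()
            if getattr(self, 'auto_stop_mocks', True):  # hypothetical
                # Stop every patch started via mock.patch(...).start()
                # once the test finishes, so stray mocks cannot leak
                # into other tests.
                self.addCleanup(mock.patch.stopall)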
Aaron Rosen [Wed, 5 Mar 2014 19:32:39 +0000 (11:32 -0800)]
Query for port before calling l3plugin.disassociate_floatingips()
The call to l3plugin.disassociate_floatingips() triggers several events
that could cause a timeout when subsequently querying the db for the
port; this patch therefore changes the code to query for the port first.
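The reordering amounts to something like this sketch (helper names are
illustrative):

    # Fetch the port before disassociating floating IPs, so the
    # query cannot time out on rows touched by the events that
    # disassociate_floatingips() triggers.
    port = self._get_port(context, port_id)   # hypothetical helper
    l3plugin.disassociate_floatingips(context, port_id)
    # ... continue the operation using the already-fetched port ...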
This patch amends a data model in order to allow users to specify
the same gateway device multiple times in a network gateway
specification.
The corresponding migration is also amended by this patch. This
approach was preferred to developing another migration, since the
migration being amended is still the most recent one for the NSX
plugin.
Paul Michali [Mon, 17 Feb 2014 20:56:22 +0000 (15:56 -0500)]
VPNaaS Device Driver for Cisco CSR
This is the device driver for the vendor-specific VPNaaS plugin. This
change relies on the service driver code (review 74144), which is also
out for review.
Note: Support for sharing IKE/IPSec policies (which is currently
prevented by the service driver code) will be done as a later
enhancement.
Note: Needs Tempest tests updated/created to test this.
Note: To run, this needs an out-of-band Cisco CSR installed and
configured.
Note: This uses a newer version of the requests library and a new
httmock library. Until these are approved (75296), the UT will be
renamed to prevent testing the REST client API to the CSR.
Jakub Libosvar [Tue, 11 Mar 2014 16:11:38 +0000 (17:11 +0100)]
Create agents table when ML2 core_plugin is used
In Icehouse the agents table is not created when ML2 is used, but a
unique constraint on that table is added later in the migration chain.
This patch makes the agents table be created when the ML2 plugin is
used as core_plugin (see the sketch below).
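A sketch of the conditional migration, following the pattern neutron
migrations used at the time; the column list is abbreviated:

    from alembic import op
    import sqlalchemy as sa

    from neutron.db import migration

    migration_for_plugins = [
        'neutron.plugins.ml2.plugin.Ml2Plugin',
        # ... the other plugins already covered ...
    ]

    def upgrade(active_plugins=None, options=None):
        if not migration.should_run(active_plugins,
                                    migration_for_plugins):
            return
        op.create_table(
            'agents',
            sa.Column('id', sa.String(36), primary_key=True),
            sa.Column('agent_type', sa.String(255), nullable=False),
            sa.Column('host', sa.String(255), nullable=False),
            # ... remaining columns elided ...
        )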
Sometimes a vm gets its tap interface unset and reset too fast to be
caught in an agent loop, and its vlan tag is then not reset. We now
detect when an interface loses its vlan tag; if that happens, the
interface is reconfigured.
Since the TAG ID is only available via the "Port" table (in the 'tag'
column), we couldn't reuse the get_vif_port_set() method's run_vsctl
call, which queries the "Interface" table, and needed a specific
run_vsctl call to the "Port" table in the new get_port_tag_dict()
method, sketched below.
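A sketch of get_port_tag_dict(), assuming the agent's existing
run_vsctl helper, the Icehouse-era jsonutils module, and the JSON
output format ovs-vsctl produces:

    from neutron.openstack.common import jsonutils

    def get_port_tag_dict(self):
        # Query the "Port" table, since only it carries the 'tag'
        # column; get_vif_port_set() queries "Interface" instead.
        args = ['--format=json', '--', '--columns=name,tag',
                'list', 'Port']
        result = self.run_vsctl(args, check_error=True)
        port_tag_dict = {}
        if not result:
            return port_tag_dict
        for name, tag in jsonutils.loads(result)['data']:
            # An untagged port is reported as ["set", []].
            if isinstance(tag, list):
                tag = None
            port_tag_dict[name] = tag
        return port_tag_dict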
Itsuro Oda [Wed, 5 Feb 2014 01:12:26 +0000 (10:12 +0900)]
Call target plugin out of DB transaction in the Metaplugin
Previously, there were cases where a target plugin was called
within a DB transaction in the metaplugin. This can cause
"DB lock timeout" errors, since a target plugin may take a long
time (e.g. communicating with a controller).
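The restructuring follows this general shape: a method sketch on the
metaplugin with hypothetical helper names, using the standard
SQLAlchemy session pattern:

    def create_network(self, context, network):
        flavor = self._get_flavor(network)            # hypothetical
        plugin = self._plugins[flavor]
        # Call the target plugin outside any DB transaction: it may
        # take a long time (e.g. talking to a controller).
        net = plugin.create_network(context, network)
        with context.session.begin(subtransactions=True):
            # Only quick, pure DB work happens inside the transaction,
            # so it can no longer trigger a DB lock timeout.
            self._add_flavor_binding(context, flavor, net['id'])
        return net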
liudong [Mon, 4 Nov 2013 17:14:50 +0000 (01:14 +0800)]
After bulk create send DHCP notification
In the past, send_dhcp_notification only handled single resources
because of VALID_RESOURCES = ['network', 'subnet', 'port'].
When I used bulk-created ports to boot an instance, the VMs could
not get an IP address from the DHCP agent. So I added some special
handling in the notify() function, sketched below.
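The special handling amounts to something like this sketch;
send_dhcp_notification stands in for the actual notifier and the
details are illustrative:

    VALID_RESOURCES = ['network', 'subnet', 'port']

    def notify(context, data, resource, action):
        # Bulk payloads arrive under the plural key, e.g.
        # {'ports': [...]}; fan them out as per-item notifications.
        if resource.endswith('s') and resource[:-1] in VALID_RESOURCES:
            singular = resource[:-1]
            for item in data[resource]:
                send_dhcp_notification(          # hypothetical
                    context, {singular: item},
                    '%s.%s.end' % (singular, action))
        elif resource in VALID_RESOURCES:
            send_dhcp_notification(              # hypothetical
                context, data, '%s.%s.end' % (resource, action))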
Itsuro Oda [Thu, 19 Dec 2013 01:51:19 +0000 (10:51 +0900)]
Fix lack of extended port attributes in Metaplugin
Previously, some extended port attributes did not appear in the
result of the GET port API when using the Metaplugin. This has
become a critical issue, since the lack of port binding information
disturbs normal port binding operation on a compute node, for
example.
The Metaplugin did not delegate get_port/get_ports to the target
plugins. This caused the problem, because the right plugin instance
was not passed to the hook which handles extended attributes.
Now get_port/get_ports of the target plugins are called, so that
extended port attributes are handled properly (see the sketch below).
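A sketch of the delegation; the flavor lookup helper is a hypothetical
name:

    def get_port(self, context, id, fields=None):
        # Find which target plugin owns this port, then delegate, so
        # the extension hooks run with the right plugin instance and
        # the extended attributes are filled in.
        flavor = self._get_flavor_by_id(context, id)   # hypothetical
        plugin = self._plugins[flavor]
        return plugin.get_port(context, id, fields)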
Add support for tenant-provided NSX gateway devices
Add a new API resource, specific to the NSX plugin, for registering
tenant-owned NSX gateway devices with the NSX controller.
This API also allows tenants to manage gateway devices on the
NSX backend.
The behaviour of the net-gateway extension is mostly unchanged, with
the only difference that newly created network gateways can now refer
exclusively to gateway devices registered using the API resource
introduced with this patch.
Having this at WARN level makes no sense, especially
because the actual message does not say anything about
a warning condition. Probably a brain fart of some sort.
Make it DEBUG, just because it may come in handy to see what
data we're passing.
NSX: Add ability to retry on 503s returned by the controller
There are a number of circumstances in which the NSX controller
may return 503. Currently the API client does not retry, so
this patch adds retry logic with a timeout, sketched below.
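A sketch of retry-with-timeout logic of this kind; the function name,
client interface, and intervals are illustrative:

    import time

    def issue_request_with_retries(client, method, url,
                                   timeout=30, interval=2):
        deadline = time.time() + timeout
        while True:
            response = client.request(method, url)   # hypothetical API
            if response.status != 503:
                return response
            if time.time() + interval > deadline:
                raise RuntimeError(
                    "NSX controller still returning 503 after "
                    "%s seconds" % timeout)
            # The controller is temporarily unavailable; back off
            # briefly and try again until the timeout expires.
            time.sleep(interval)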
ronak [Thu, 6 Mar 2014 19:24:25 +0000 (11:24 -0800)]
Nuage plugin was missed in floatingip_status db migration script
The fix for bug 1287630 did not include the Nuage plugin
in those affected by the migration.
Nuage CI has been failing floatingip tests since then,
reporting an unknown column error.
This fix adds the Nuage plugin to the list.
The resource alias is unchanged, so there is really
no reason to drag this forward.
If API clients are using the name or the description
in order to play with a server's extension, they are
doing it wrong, so they deserve the grief caused
by this change.
When using a cluster of load-balanced Neutron Servers,
the order in which extension files are loaded may differ
from one server to another. This is usually harmless,
but it is better to force the list to be loaded in the
same way across the entire cluster, just in case
something funky is going on.
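The fix amounts to iterating extension files in sorted order, roughly
as sketched here (the function name is illustrative):

    import os

    def iter_extension_files(path):
        # Sort the directory listing so every load-balanced server
        # discovers (and therefore loads) extensions in the same order.
        for filename in sorted(os.listdir(path)):
            if filename.endswith('.py') and not filename.startswith('_'):
                yield os.path.join(path, filename)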