In Python 3, dhcp.LOG.process is not called when calling dhcp.LOG.debug. Since
what we really want to know is whether dhcp.LOG.debug is called, tweak the test
a little.
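A minimal sketch of the tweaked assertion, assuming the test exercises
code that logs through dhcp.LOG (the test class and the stand-in logging
call below are illustrative):

    import unittest

    import mock

    from neutron.agent.linux import dhcp


    class TestDhcpLogCall(unittest.TestCase):
        def test_debug_is_called(self):
            # On Python 3 the logging internals no longer route debug()
            # through LOG.process, so assert on debug() itself.
            with mock.patch.object(dhcp.LOG, 'debug') as mock_debug:
                # ...exercise the code path expected to log...
                dhcp.LOG.debug('device %s updated', 'tap0')  # stand-in
                self.assertTrue(mock_debug.called)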
Fix KeyError: 'L3_ROUTER_NAT' in l3 scheduler functional test
The issue was introduced by commit with change ID:
Ic4df299bac83c80abf0890a81d2dd36d15993b33
In Neutron-speak we define the 'core plugin' as something like ML2, while
service plugins examples are L3, VPNaaS, etc. The code was initializing
the L3 service plugin as the core plugin, which is unexpected behavior.
The code will now use the DB core plugin base class as the core plugin,
and not initialize the service plugins. The tests will manually and locally
instantiate an L3 service plugin instance and use it.
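A hedged sketch of the resulting test setup (module paths and helper
names below are assumptions, not quoted from the patch):

    from neutron.services.l3_router import l3_router_plugin
    from neutron.tests.unit import testlib_api


    class L3SchedulerBaseTest(testlib_api.SqlTestCase):
        def setUp(self):
            super(L3SchedulerBaseTest, self).setUp()
            # The core plugin is a plain DB base plugin, not a service
            # plugin.
            self.setup_coreplugin(
                'neutron.db.db_base_plugin_v2.NeutronDbPluginV2')
            # The L3 service plugin is instantiated locally for the test
            # rather than being loaded as the core plugin.
            self.l3_plugin = l3_router_plugin.L3RouterPlugin()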
This patch introduces database support for tracking Neutron
resource usage data. A single DB model class tracks usage
info for all neutron resources.
The patch also provides a simple API for managing resource
usage info, as well as unit tests providing coverage for
this API.
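A rough sketch of a single tracking model in this spirit (column names
are illustrative, not the patch's actual schema):

    import sqlalchemy as sa

    from neutron.db import model_base


    class ResourceUsage(model_base.BASEV2):
        """One row per (tenant, resource type) tracks usage info."""

        __tablename__ = 'resourceusages'

        resource = sa.Column(sa.String(255), primary_key=True)
        tenant_id = sa.Column(sa.String(255), primary_key=True)
        in_use = sa.Column(sa.Integer, nullable=False,
                           server_default='0')
        # Marks rows whose counter may be stale and must be recomputed
        # before being trusted.
        dirty = sa.Column(sa.Boolean, nullable=False,
                          server_default=sa.sql.false())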
This patch also makes a slight change to the ContextBase
class, adding the ability to explicitly set is_advsvc at
initialization time. While this probably makes no difference
for practical use of the context class, it simplifies
development of DB-only unit tests.
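The constructor change might look roughly like this self-contained
sketch (the real ContextBase carries many more fields, and the policy
fallback is stubbed out here):

    class ContextBase(object):
        def __init__(self, user_id, tenant_id, is_admin=None,
                     is_advsvc=None):
            self.user_id = user_id
            self.tenant_id = tenant_id
            self.is_admin = bool(is_admin)
            # New: callers may set is_advsvc explicitly, so DB-only unit
            # tests can build advanced-service contexts without policy.
            if is_advsvc is None:
                is_advsvc = False  # the real code consults policy here
            self.is_advsvc = is_advsvc


    # A DB-only unit test can now simply do:
    ctx = ContextBase('user', 'tenant', is_advsvc=True)
    assert ctx.is_advsvc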
rossella [Wed, 14 Jan 2015 14:11:36 +0000 (15:11 +0100)]
L2 agent RPC: add new RPC calls
This patch introduces two new RPC calls:
* update_device_list
* get_devices_details_list_and_failed_devices
update_device_up and update_device_down are the RPC calls
used by the L2 agent to notify the plugin that
a device is up or down. These calls accept only one device.
update_device_list accepts a list of devices that will
be put up or down and returns a list of devices for which
the operation failed.
get_devices_details_list_and_failed_devices is a new RPC
call that returns a list of device details and a list of
devices for which it was not possible to retrieve the
details.
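A sketch of agent-side usage of the two calls (argument and result key
names are a best guess at the shape; plugin_rpc stands for the agent's
RPC client to the plugin):

    def update_devices(plugin_rpc, context, devices_up, devices_down,
                       agent_id, host):
        # One round trip updates the status of many devices at once.
        result = plugin_rpc.update_device_list(
            context, devices_up, devices_down, agent_id, host)
        # Only the devices that failed need to be retried.
        return (result.get('failed_devices_up', []) +
                result.get('failed_devices_down', []))


    def get_details(plugin_rpc, context, devices, agent_id, host):
        result = plugin_rpc.get_devices_details_list_and_failed_devices(
            context, devices, agent_id, host)
        return result['devices'], result['failed_devices']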
Thanks to these new calls, the agent will be able to update
the status of several devices using one RPC call
and to retry the operation only for the devices that failed
instead of performing a full resync.
This will be implemented in another patch.
This patch introduces the new calls only for the OVS agent.
A following patch will take care of the other L2 agents.
This patch ensures log_opt_values is invoked only once, thus
avoiding annoying multiple dumps of option values.
Note that if rpc_workers is greater than 0, a second dump will
be logged. This patch does not address that, as the REST and RPC
servers are going to be separated as part of the feature/pecan
work.
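One way to guarantee a single dump is a module-level guard, roughly as
below (the guard is illustrative; the patch may simply consolidate the
call sites):

    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)

    _OPTS_LOGGED = False


    def log_opt_values_once():
        """Dump configuration option values at most once per process."""
        global _OPTS_LOGGED
        if not _OPTS_LOGGED:
            cfg.CONF.log_opt_values(LOG, logging.DEBUG)
            _OPTS_LOGGED = True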
Pavel Bondar [Thu, 14 May 2015 11:06:08 +0000 (14:06 +0300)]
Add Pluggable IPAM Backend Part 2
Introduces a new pluggable IPAM backend.
IP/subnet allocation calls are sent to the IPAM driver.
Calls to the IPAM driver are treated as calls into a third-party
environment, so if any action fails, a rollback action is invoked.
Removes the associate_neutron_subnet step from the interface and reference
driver. It is no longer needed because the foreign key relationship
between the IPAM subnet and the neutron subnet was removed,
so the IPAM subnet can store the id of a neutron subnet
which has not been created yet.
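The rollback pattern described above, sketched (method names
approximate the IPAM driver interface rather than quoting the patch):

    def allocate_ips_with_rollback(ipam_driver, subnets_and_requests):
        """Allocate IPs via the IPAM driver, undoing earlier
        allocations if a later one fails."""
        allocated = []
        try:
            for subnet_id, address_request in subnets_and_requests:
                ipam_subnet = ipam_driver.get_subnet(subnet_id)
                ip = ipam_subnet.allocate(address_request)
                allocated.append((ipam_subnet, ip))
        except Exception:
            # The driver is a third-party system: its state is not
            # covered by the neutron DB transaction, so undo explicitly.
            for ipam_subnet, ip in allocated:
                ipam_subnet.deallocate(ip)
            raise
        return [ip for _, ip in allocated]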
For now only the reference IPAM driver is available.
Temporarily disabled test_ipam_pluggable_backend in
gate-neutron-python34. The current patch adds execution of
parts of test_db_base_plugin_v2, which is not py34 compatible yet.
It might be enabled again once 204791 is merged.
Fix handling of port-range-min 0 in secgroup RPC and agent
For the TCP/UDP protocols the valid port range is 0 to 65535, so for a
security group rule the valid range is also 0 to 65535. This
patch makes two changes:
1. If a security group rule's port_range_min is 0, the L2 agent now
   receives the real value 0 when it gets the rule for a
   device via RPC.
2. For IptablesFirewallDriver, if the port range is [0, xxxx], the L2
   agent also needs to add this rule to iptables.
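The pitfall behind change 1 is the classic falsy-zero check; a sketch
of the shape of the fix (not the driver's actual helper):

    def port_range_arg(port_range_min, port_range_max):
        # Buggy version: 'if port_range_min:' silently drops a valid 0.
        # Comparing against None keeps [0, xxxx] ranges in the rule.
        if port_range_min is not None and port_range_max is not None:
            return ['--dport', '%s:%s' % (port_range_min,
                                          port_range_max)]
        return []


    assert port_range_arg(0, 65535) == ['--dport', '0:65535']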
This patch adds the address scope API tests and
also returns the address scope resource attribute map
from the get_extended_resources() function, which was
missed in the previous patch.
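In Neutron's extension framework that typically looks like the sketch
below (attribute map contents elided; only the shape is shown):

    from neutron.api import extensions

    # Attribute map for the address scope resource (contents elided).
    RESOURCE_ATTRIBUTE_MAP = {'address_scopes': {}}


    class Address_scope(extensions.ExtensionDescriptor):

        def get_extended_resources(self, version):
            # Returning the map, rather than {}, is what exposes the
            # address scope attributes; this was missed previously.
            if version == "2.0":
                return RESOURCE_ATTRIBUTE_MAP
            return {}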
Yalei Wang [Tue, 9 Jun 2015 05:46:57 +0000 (13:46 +0800)]
Add new OVS DB API to retrieve the interface name list in a bridge
In OVS, ports are not equal to interfaces once a bond port is created.
This patch adds the new API get_iface_name_list to get the interface names;
it is supplementary to the current get_port_name_list API.
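A sketch of the new call on top of the agent's ovsdb API (the
list_ifaces primitive is assumed here; the patch's actual plumbing may
differ):

    class OVSBridge(object):  # excerpt; the real class is in ovs_lib
        def get_iface_name_list(self):
            # A bond port aggregates several interfaces under a single
            # port, so get_port_name_list (Port table) under-reports;
            # query this bridge's Interface records instead.
            return self.ovsdb.list_ifaces(self.br_name).execute(
                check_error=True)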
There may be projects that have not been created yet, so revise
the wording a bit to clarify that new projects can also
submit for inclusion right off the bat.
sridhargaddam [Fri, 19 Jun 2015 16:17:01 +0000 (16:17 +0000)]
Fix race condition by using lock on enable_radvd
For an HA router, radvd is spawned only on the master node.
KeepalivedStateChangeHandler, after receiving the state notification
on the unix domain socket, either enables or disables the radvd
process depending on the state of the HA router. In parallel, if
there is a request to add an interface to the router, router.process()
also tries to enable radvd. There is therefore a possible race
condition when two threads try to enable radvd in parallel. This patch
uses an appropriate locking mechanism to prevent this race condition.
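A minimal sketch of the locking, using oslo.concurrency and assuming
both paths funnel through one enable method (names illustrative):

    from oslo_concurrency import lockutils


    class HaRouter(object):
        def __init__(self, radvd):
            self.radvd = radvd  # monitor driving the radvd process

        @lockutils.synchronized('enable-radvd')
        def enable_radvd(self, router=None):
            # Both the keepalived state-change handler and
            # router.process() land here; the lock serializes them so
            # radvd is not enabled by two threads at once.
            self.radvd.enable(router)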
There is nothing Linux or agent specific in the function. I need to use
it outside agent code in one of the dependent patches, hence moving it to a
better location while leaving the previous symbol in place, with a
deprecation warning, for backwards compatibility.
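One common way to leave the deprecated symbol in place is
debtcollector's moved_function (a sketch; the function name below is
hypothetical):

    # old_module.py -- keeps the old import path working, with a warning.
    from debtcollector import moves

    from new_module import ensure_tree  # hypothetical moved function

    # Calls through the old location emit a DeprecationWarning pointing
    # at the new home while continuing to work.
    ensure_tree = moves.moved_function(ensure_tree, 'ensure_tree',
                                       __name__)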
Henry Gessau [Thu, 25 Jun 2015 17:58:30 +0000 (13:58 -0400)]
Remove deprecated OVS and LB plugin DB tables
Now that the openvswitch and linuxbridge monolithic plugins have been
deprecated and replaced by ML2 mechanism drivers, we can remove their
database tables.
Also remove the migrate_to_ml2.py script, which is no longer supported.
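The corresponding alembic migration boils down to a series of
drop_table calls, roughly (table names illustrative, not the patch's
full list):

    from alembic import op


    def upgrade():
        # The monolithic plugins are gone, so their bookkeeping tables
        # can be dropped.
        for table in ('ovs_network_bindings', 'ovs_vlan_allocations'):
            op.drop_table(table)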
Louis Taylor [Mon, 20 Jul 2015 17:20:29 +0000 (17:20 +0000)]
Port help text for dvr_base_mac from neutron.conf
The help text for the option in the current static configuration file
contained additional information which would be lost when we switch to
generating the file. This patch copies it over to the oslo.config help
parameter.
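On the oslo.config side this looks roughly as follows (help text
abridged; the default shown is the commonly shipped one):

    from oslo_config import cfg

    dvr_opts = [
        cfg.StrOpt('dvr_base_mac',
                   default='fa:16:3f:00:00:00',
                   help='The base MAC address used for unique DVR '
                        'instances. Kept distinct from base_mac to '
                        'avoid collisions with tenant port MACs.'),
    ]

    cfg.CONF.register_opts(dvr_opts)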
Adds garp_master_repeat and garp_master_refresh to keepalived.conf
This patch aims to solve an issue where, in some cases, the L3 agent
starts before the OVS agent during the node boot process.
In such a scenario, the L3 agent configures the routers before the OVS agent
has finished syncing with the server and started processing ovsdb monitor
updates.
The result:
1. The L3 agent finishes configuring an HA router
2. Keepalived is spawned
3. On transition to the master state, keepalived sends gratuitous ARPs
   ^--- The issue
4. Only then does the OVS agent finish plugging its ports
Meaning, the gratuitous ARPs are lost, but the router is acting as master.
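A sketch of how the agent might render the two new knobs into the
vrrp_instance block (values illustrative):

    def build_garp_lines(repeat=5, refresh=10):
        """Render the extra GARP settings for a vrrp_instance block.

        garp_master_repeat: number of gratuitous ARPs sent in a burst
        on transition to MASTER; garp_master_refresh: resend the burst
        every N seconds, so ARPs sent before the port was wired are
        repeated once the OVS agent finishes plugging it.
        """
        return ['    garp_master_repeat %d' % repeat,
                '    garp_master_refresh %d' % refresh]


    print('\n'.join(build_garp_lines()))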
Closes-Bug: #1453855