If the neutron server erroneously thinks that the L2 agent is down,
it will fail to bind a port, which can lead to VM spawn errors.
However, the issue is only transient because the agent effectively
is only 'late' in reporting back.
The best solution would be an alpha-count algorithm (so that we can
detect persistent failures more reliably), but for now let's be more
tolerant before assuming that the agent is down, by waiting at least
twice the report interval plus a teeny tiny bit.
Change-Id: I544135ce1f6b7eaefb34ac44af8f5844d92ddd95
Closes-Bug: #1244255
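The tolerance described above can be sketched as a simple staleness check (a minimal illustration, not the actual Neutron code; the `is_agent_down` helper name and the standalone constants are assumptions made for this example):

```python
import datetime

# Assumed values mirroring the defaults in this change:
# report_interval = 4, agent_down_time = 9 (> 2 * report_interval).
REPORT_INTERVAL = 4
AGENT_DOWN_TIME = 9


def is_agent_down(heartbeat_time, now=None):
    """Return True if the agent's last heartbeat is older than
    agent_down_time, i.e. the agent has missed roughly two reports."""
    now = now or datetime.datetime.utcnow()
    return now - heartbeat_time > datetime.timedelta(seconds=AGENT_DOWN_TIME)


now = datetime.datetime.utcnow()
# A single late report (5 s old heartbeat) is tolerated ...
print(is_agent_down(now - datetime.timedelta(seconds=5), now))   # False
# ... but missing two report intervals (10 s) marks the agent down.
print(is_agent_down(now - datetime.timedelta(seconds=10), now))  # True
```

With the old default of `agent_down_time = 5`, a heartbeat delayed by a single second past one report interval would already have marked the agent down; doubling the window rides out one late report.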
# max_fixed_ips_per_port = 5
# =========== items for agent management extension =============
-# Seconds to regard the agent as down.
-# agent_down_time = 5
+# Seconds to regard the agent as down; should be at least twice
+# report_interval, to be sure the agent is down for good
+# agent_down_time = 9
# =========== end of items for agent management extension =====
# =========== items for agent scheduler extension =============
# root_helper = sudo
# =========== items for agent management extension =============
-# seconds between nodes reporting state to server, should be less than
-# agent_down_time
+# seconds between nodes reporting state to server; should be less than
+# agent_down_time, best if it is half or less than agent_down_time
# report_interval = 4
# =========== end of items for agent management extension =====
AGENT_STATE_OPTS = [
cfg.FloatOpt('report_interval', default=4,
- help=_('Seconds between nodes reporting state to server')),
+ help=_('Seconds between nodes reporting state to server; '
+ 'should be less than agent_down_time, best if it '
+ 'is half or less than agent_down_time.')),
]
LOG = logging.getLogger(__name__)
cfg.CONF.register_opt(
- cfg.IntOpt('agent_down_time', default=5,
- help=_("Seconds to regard the agent is down.")))
+ cfg.IntOpt('agent_down_time', default=9,
+ help=_("Seconds to regard the agent as down; should be at "
+ "least twice report_interval, to be sure the "
+ "agent is down for good.")))
class Agent(model_base.BASEV2, models_v2.HasId):